Test Report: Docker_Linux_crio 17965

5e5f17cf679477cd200ce76c4e9747d73049443e:2024-01-16:32726

Failed tests (3/320)

| Order | Failed test                                         | Duration (s) |
|-------|-----------------------------------------------------|--------------|
| 39    | TestAddons/parallel/Ingress                         | 155.07       |
| 171   | TestIngressAddonLegacy/serial/ValidateIngressAddons | 180.94       |
| 221   | TestMultiNode/serial/PingHostFrom2Pods              | 3.21         |
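A single failed case can be re-run in isolation with the standard Go test runner. This is a minimal sketch, assuming a minikube source checkout with a built out/minikube-linux-amd64; the exact driver and runtime flags this CI job passed are not part of this report:

	# re-run only the failing Ingress subtest; -run accepts the slash-separated subtest path
	go test ./test/integration -run 'TestAddons/parallel/Ingress' -timeout 90m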
TestAddons/parallel/Ingress (155.07s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-411655 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-411655 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-411655 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [7cfd8aee-aed9-4122-b42e-413784a0ed65] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [7cfd8aee-aed9-4122-b42e-413784a0ed65] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.003544958s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-411655 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-411655 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.929241994s)

** stderr **
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
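The failed probe can be repeated by hand with the same command the test ran. The remote status 28 is curl's CURLE_OPERATION_TIMEDOUT exit code, meaning nothing answered on 127.0.0.1:80 inside the node before curl gave up; minikube surfaces that as its own exit status 1. A sketch of a manual repro plus one way to dig further (the label selector matches the one used at addons_test.go:207 above):

	out/minikube-linux-amd64 -p addons-411655 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
	kubectl --context addons-411655 -n ingress-nginx logs -l app.kubernetes.io/component=controller --tail=50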
addons_test.go:286: (dbg) Run:  kubectl --context addons-411655 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-411655 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-411655 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-411655 addons disable ingress-dns --alsologtostderr -v=1: (1.480439217s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-411655 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-411655 addons disable ingress --alsologtostderr -v=1: (7.639793171s)
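At this point the ingress and ingress-dns addons have been disabled again as part of test cleanup. For reference, the remaining addon state of a profile can be checked at any time; this is a generic minikube command, not a step of this test:

	out/minikube-linux-amd64 -p addons-411655 addons list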
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-411655
helpers_test.go:235: (dbg) docker inspect addons-411655:

-- stdout --
	[
	    {
	        "Id": "e6c8900ced86d05d54b8a705a2e90759b6992190aa1282aa5bdf448757002050",
	        "Created": "2024-01-16T02:37:27.184964766Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 452804,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-16T02:37:27.439585751Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9941de2e064a4a6a7155bfc66cedd2854b8c725b77bb8d4eaf81bef39f951dd7",
	        "ResolvConfPath": "/var/lib/docker/containers/e6c8900ced86d05d54b8a705a2e90759b6992190aa1282aa5bdf448757002050/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e6c8900ced86d05d54b8a705a2e90759b6992190aa1282aa5bdf448757002050/hostname",
	        "HostsPath": "/var/lib/docker/containers/e6c8900ced86d05d54b8a705a2e90759b6992190aa1282aa5bdf448757002050/hosts",
	        "LogPath": "/var/lib/docker/containers/e6c8900ced86d05d54b8a705a2e90759b6992190aa1282aa5bdf448757002050/e6c8900ced86d05d54b8a705a2e90759b6992190aa1282aa5bdf448757002050-json.log",
	        "Name": "/addons-411655",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-411655:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-411655",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0b1bb47ec54a47abc996447b89f4a3d6c937b96cc72d0b1f08ba79bef2cba90a-init/diff:/var/lib/docker/overlay2/bba00fb4c7e32355be8b1614d52104fcb5f09794e9ed4467560e2767dcfd351b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0b1bb47ec54a47abc996447b89f4a3d6c937b96cc72d0b1f08ba79bef2cba90a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0b1bb47ec54a47abc996447b89f4a3d6c937b96cc72d0b1f08ba79bef2cba90a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0b1bb47ec54a47abc996447b89f4a3d6c937b96cc72d0b1f08ba79bef2cba90a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-411655",
	                "Source": "/var/lib/docker/volumes/addons-411655/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-411655",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-411655",
	                "name.minikube.sigs.k8s.io": "addons-411655",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a5083c4d8a8420038fa6022e5b595a3d679d63b7820039028940e116fcc1b441",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33207"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33206"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33203"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33205"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33204"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a5083c4d8a84",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-411655": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "e6c8900ced86",
	                        "addons-411655"
	                    ],
	                    "NetworkID": "38c8164ff27454a17d628fe5fc54855f40862fcd956183ab9338308fa58058cd",
	                    "EndpointID": "d8cdd155e642573eb674f9e159ebd982949429878fb899dc5429e500c6e16534",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
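The Ports map in the inspect output above is how the harness resolves the host-mapped guest ports; the same Go template that appears later in the "Last Start" log can be used directly. For this run it resolves the SSH endpoint to 127.0.0.1:33207:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-411655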
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-411655 -n addons-411655
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-411655 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-411655 logs -n 25: (1.204627093s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-198405                                                                     | download-only-198405   | jenkins | v1.32.0 | 16 Jan 24 02:37 UTC | 16 Jan 24 02:37 UTC |
	| delete  | -p download-only-734827                                                                     | download-only-734827   | jenkins | v1.32.0 | 16 Jan 24 02:37 UTC | 16 Jan 24 02:37 UTC |
	| start   | --download-only -p                                                                          | download-docker-254883 | jenkins | v1.32.0 | 16 Jan 24 02:37 UTC |                     |
	|         | download-docker-254883                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-254883                                                                   | download-docker-254883 | jenkins | v1.32.0 | 16 Jan 24 02:37 UTC | 16 Jan 24 02:37 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-207552   | jenkins | v1.32.0 | 16 Jan 24 02:37 UTC |                     |
	|         | binary-mirror-207552                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:41525                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-207552                                                                     | binary-mirror-207552   | jenkins | v1.32.0 | 16 Jan 24 02:37 UTC | 16 Jan 24 02:37 UTC |
	| addons  | disable dashboard -p                                                                        | addons-411655          | jenkins | v1.32.0 | 16 Jan 24 02:37 UTC |                     |
	|         | addons-411655                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-411655          | jenkins | v1.32.0 | 16 Jan 24 02:37 UTC |                     |
	|         | addons-411655                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-411655 --wait=true                                                                | addons-411655          | jenkins | v1.32.0 | 16 Jan 24 02:37 UTC | 16 Jan 24 02:39 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --driver=docker                                                               |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |         |         |                     |                     |
	| addons  | addons-411655 addons disable                                                                | addons-411655          | jenkins | v1.32.0 | 16 Jan 24 02:39 UTC | 16 Jan 24 02:39 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-411655 addons                                                                        | addons-411655          | jenkins | v1.32.0 | 16 Jan 24 02:39 UTC | 16 Jan 24 02:39 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-411655 ip                                                                            | addons-411655          | jenkins | v1.32.0 | 16 Jan 24 02:39 UTC | 16 Jan 24 02:39 UTC |
	| addons  | addons-411655 addons disable                                                                | addons-411655          | jenkins | v1.32.0 | 16 Jan 24 02:39 UTC | 16 Jan 24 02:39 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-411655          | jenkins | v1.32.0 | 16 Jan 24 02:39 UTC | 16 Jan 24 02:39 UTC |
	|         | addons-411655                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-411655 ssh curl -s                                                                   | addons-411655          | jenkins | v1.32.0 | 16 Jan 24 02:39 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-411655          | jenkins | v1.32.0 | 16 Jan 24 02:40 UTC | 16 Jan 24 02:40 UTC |
	|         | -p addons-411655                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-411655 ssh cat                                                                       | addons-411655          | jenkins | v1.32.0 | 16 Jan 24 02:40 UTC | 16 Jan 24 02:40 UTC |
	|         | /opt/local-path-provisioner/pvc-b9b3dea2-21d2-4d07-abee-78e9d4e666b6_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-411655 addons disable                                                                | addons-411655          | jenkins | v1.32.0 | 16 Jan 24 02:40 UTC | 16 Jan 24 02:40 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-411655          | jenkins | v1.32.0 | 16 Jan 24 02:40 UTC | 16 Jan 24 02:40 UTC |
	|         | -p addons-411655                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-411655          | jenkins | v1.32.0 | 16 Jan 24 02:40 UTC | 16 Jan 24 02:40 UTC |
	|         | addons-411655                                                                               |                        |         |         |                     |                     |
	| addons  | addons-411655 addons                                                                        | addons-411655          | jenkins | v1.32.0 | 16 Jan 24 02:41 UTC | 16 Jan 24 02:41 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-411655 addons                                                                        | addons-411655          | jenkins | v1.32.0 | 16 Jan 24 02:41 UTC | 16 Jan 24 02:41 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-411655 ip                                                                            | addons-411655          | jenkins | v1.32.0 | 16 Jan 24 02:42 UTC | 16 Jan 24 02:42 UTC |
	| addons  | addons-411655 addons disable                                                                | addons-411655          | jenkins | v1.32.0 | 16 Jan 24 02:42 UTC | 16 Jan 24 02:42 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-411655 addons disable                                                                | addons-411655          | jenkins | v1.32.0 | 16 Jan 24 02:42 UTC | 16 Jan 24 02:42 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 02:37:05
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 02:37:05.710821  452136 out.go:296] Setting OutFile to fd 1 ...
	I0116 02:37:05.710958  452136 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:37:05.710964  452136 out.go:309] Setting ErrFile to fd 2...
	I0116 02:37:05.710972  452136 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:37:05.711195  452136 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-443749/.minikube/bin
	I0116 02:37:05.711830  452136 out.go:303] Setting JSON to false
	I0116 02:37:05.712834  452136 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":8372,"bootTime":1705364254,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 02:37:05.712925  452136 start.go:138] virtualization: kvm guest
	I0116 02:37:05.715201  452136 out.go:177] * [addons-411655] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 02:37:05.716872  452136 out.go:177]   - MINIKUBE_LOCATION=17965
	I0116 02:37:05.716863  452136 notify.go:220] Checking for updates...
	I0116 02:37:05.718639  452136 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 02:37:05.720347  452136 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17965-443749/kubeconfig
	I0116 02:37:05.721890  452136 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-443749/.minikube
	I0116 02:37:05.723243  452136 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0116 02:37:05.724693  452136 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 02:37:05.726320  452136 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 02:37:05.749047  452136 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0116 02:37:05.749206  452136 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 02:37:05.799242  452136 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2024-01-16 02:37:05.791499657 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0116 02:37:05.799343  452136 docker.go:295] overlay module found
	I0116 02:37:05.801055  452136 out.go:177] * Using the docker driver based on user configuration
	I0116 02:37:05.802510  452136 start.go:298] selected driver: docker
	I0116 02:37:05.802519  452136 start.go:902] validating driver "docker" against <nil>
	I0116 02:37:05.802530  452136 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 02:37:05.803268  452136 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 02:37:05.854077  452136 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2024-01-16 02:37:05.846113615 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0116 02:37:05.854233  452136 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0116 02:37:05.854451  452136 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0116 02:37:05.856179  452136 out.go:177] * Using Docker driver with root privileges
	I0116 02:37:05.857535  452136 cni.go:84] Creating CNI manager for ""
	I0116 02:37:05.857559  452136 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0116 02:37:05.857569  452136 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0116 02:37:05.857579  452136 start_flags.go:321] config:
	{Name:addons-411655 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-411655 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 02:37:05.859082  452136 out.go:177] * Starting control plane node addons-411655 in cluster addons-411655
	I0116 02:37:05.860408  452136 cache.go:121] Beginning downloading kic base image for docker with crio
	I0116 02:37:05.861865  452136 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0116 02:37:05.863316  452136 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 02:37:05.863357  452136 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17965-443749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0116 02:37:05.863367  452136 cache.go:56] Caching tarball of preloaded images
	I0116 02:37:05.863419  452136 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0116 02:37:05.863466  452136 preload.go:174] Found /home/jenkins/minikube-integration/17965-443749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0116 02:37:05.863476  452136 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0116 02:37:05.863807  452136 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/addons-411655/config.json ...
	I0116 02:37:05.863831  452136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/addons-411655/config.json: {Name:mkc6eb6a94cf435e9833630451c48279cc5cb2aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:37:05.878793  452136 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0116 02:37:05.878936  452136 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory
	I0116 02:37:05.878956  452136 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory, skipping pull
	I0116 02:37:05.878962  452136 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in cache, skipping pull
	I0116 02:37:05.878973  452136 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 as a tarball
	I0116 02:37:05.878985  452136 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 from local cache
	I0116 02:37:17.427942  452136 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 from cached tarball
	I0116 02:37:17.427995  452136 cache.go:194] Successfully downloaded all kic artifacts
	I0116 02:37:17.428055  452136 start.go:365] acquiring machines lock for addons-411655: {Name:mke2d5d12be3b106331363fdbbe7d3065434083e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 02:37:17.428171  452136 start.go:369] acquired machines lock for "addons-411655" in 90.346µs
	I0116 02:37:17.428204  452136 start.go:93] Provisioning new machine with config: &{Name:addons-411655 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-411655 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 02:37:17.428327  452136 start.go:125] createHost starting for "" (driver="docker")
	I0116 02:37:17.430442  452136 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0116 02:37:17.430688  452136 start.go:159] libmachine.API.Create for "addons-411655" (driver="docker")
	I0116 02:37:17.430730  452136 client.go:168] LocalClient.Create starting
	I0116 02:37:17.430839  452136 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17965-443749/.minikube/certs/ca.pem
	I0116 02:37:17.581078  452136 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17965-443749/.minikube/certs/cert.pem
	I0116 02:37:18.032815  452136 cli_runner.go:164] Run: docker network inspect addons-411655 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0116 02:37:18.048034  452136 cli_runner.go:211] docker network inspect addons-411655 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0116 02:37:18.048100  452136 network_create.go:281] running [docker network inspect addons-411655] to gather additional debugging logs...
	I0116 02:37:18.048119  452136 cli_runner.go:164] Run: docker network inspect addons-411655
	W0116 02:37:18.062889  452136 cli_runner.go:211] docker network inspect addons-411655 returned with exit code 1
	I0116 02:37:18.062923  452136 network_create.go:284] error running [docker network inspect addons-411655]: docker network inspect addons-411655: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-411655 not found
	I0116 02:37:18.062934  452136 network_create.go:286] output of [docker network inspect addons-411655]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-411655 not found
	
	** /stderr **
	I0116 02:37:18.063045  452136 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0116 02:37:18.077874  452136 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0020e1a70}
	I0116 02:37:18.077918  452136 network_create.go:124] attempt to create docker network addons-411655 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0116 02:37:18.077964  452136 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-411655 addons-411655
	I0116 02:37:18.127497  452136 network_create.go:108] docker network addons-411655 192.168.49.0/24 created
	I0116 02:37:18.127540  452136 kic.go:121] calculated static IP "192.168.49.2" for the "addons-411655" container
	I0116 02:37:18.127605  452136 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0116 02:37:18.142971  452136 cli_runner.go:164] Run: docker volume create addons-411655 --label name.minikube.sigs.k8s.io=addons-411655 --label created_by.minikube.sigs.k8s.io=true
	I0116 02:37:18.159373  452136 oci.go:103] Successfully created a docker volume addons-411655
	I0116 02:37:18.159484  452136 cli_runner.go:164] Run: docker run --rm --name addons-411655-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-411655 --entrypoint /usr/bin/test -v addons-411655:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
	I0116 02:37:21.942374  452136 cli_runner.go:217] Completed: docker run --rm --name addons-411655-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-411655 --entrypoint /usr/bin/test -v addons-411655:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib: (3.782839701s)
	I0116 02:37:21.942407  452136 oci.go:107] Successfully prepared a docker volume addons-411655
	I0116 02:37:21.942435  452136 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 02:37:21.942461  452136 kic.go:194] Starting extracting preloaded images to volume ...
	I0116 02:37:21.942526  452136 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17965-443749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-411655:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0116 02:37:27.117348  452136 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17965-443749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-411655:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir: (5.174777913s)
	I0116 02:37:27.117387  452136 kic.go:203] duration metric: took 5.174923 seconds to extract preloaded images to volume
	W0116 02:37:27.117550  452136 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0116 02:37:27.117702  452136 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0116 02:37:27.170237  452136 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-411655 --name addons-411655 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-411655 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-411655 --network addons-411655 --ip 192.168.49.2 --volume addons-411655:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0
	I0116 02:37:27.447230  452136 cli_runner.go:164] Run: docker container inspect addons-411655 --format={{.State.Running}}
	I0116 02:37:27.464945  452136 cli_runner.go:164] Run: docker container inspect addons-411655 --format={{.State.Status}}
	I0116 02:37:27.482078  452136 cli_runner.go:164] Run: docker exec addons-411655 stat /var/lib/dpkg/alternatives/iptables
	I0116 02:37:27.521956  452136 oci.go:144] the created container "addons-411655" has a running status.
	I0116 02:37:27.521996  452136 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17965-443749/.minikube/machines/addons-411655/id_rsa...
	I0116 02:37:27.674301  452136 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17965-443749/.minikube/machines/addons-411655/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0116 02:37:27.695138  452136 cli_runner.go:164] Run: docker container inspect addons-411655 --format={{.State.Status}}
	I0116 02:37:27.713280  452136 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0116 02:37:27.713306  452136 kic_runner.go:114] Args: [docker exec --privileged addons-411655 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0116 02:37:27.773733  452136 cli_runner.go:164] Run: docker container inspect addons-411655 --format={{.State.Status}}
	I0116 02:37:27.789095  452136 machine.go:88] provisioning docker machine ...
	I0116 02:37:27.789138  452136 ubuntu.go:169] provisioning hostname "addons-411655"
	I0116 02:37:27.789205  452136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-411655
	I0116 02:37:27.806523  452136 main.go:141] libmachine: Using SSH client type: native
	I0116 02:37:27.807032  452136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 33207 <nil> <nil>}
	I0116 02:37:27.807051  452136 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-411655 && echo "addons-411655" | sudo tee /etc/hostname
	I0116 02:37:27.807767  452136 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0116 02:37:30.950679  452136 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-411655
	
	I0116 02:37:30.950788  452136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-411655
	I0116 02:37:30.966673  452136 main.go:141] libmachine: Using SSH client type: native
	I0116 02:37:30.967009  452136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 33207 <nil> <nil>}
	I0116 02:37:30.967027  452136 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-411655' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-411655/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-411655' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 02:37:31.100212  452136 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 02:37:31.100240  452136 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17965-443749/.minikube CaCertPath:/home/jenkins/minikube-integration/17965-443749/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17965-443749/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17965-443749/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17965-443749/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17965-443749/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17965-443749/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17965-443749/.minikube}
	I0116 02:37:31.100298  452136 ubuntu.go:177] setting up certificates
	I0116 02:37:31.100319  452136 provision.go:83] configureAuth start
	I0116 02:37:31.100384  452136 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-411655
	I0116 02:37:31.116316  452136 provision.go:138] copyHostCerts
	I0116 02:37:31.116392  452136 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-443749/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17965-443749/.minikube/cert.pem (1123 bytes)
	I0116 02:37:31.116507  452136 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-443749/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17965-443749/.minikube/key.pem (1675 bytes)
	I0116 02:37:31.116575  452136 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-443749/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17965-443749/.minikube/ca.pem (1078 bytes)
	I0116 02:37:31.116638  452136 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17965-443749/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17965-443749/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17965-443749/.minikube/certs/ca-key.pem org=jenkins.addons-411655 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-411655]
	I0116 02:37:31.401523  452136 provision.go:172] copyRemoteCerts
	I0116 02:37:31.401609  452136 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 02:37:31.401657  452136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-411655
	I0116 02:37:31.417696  452136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33207 SSHKeyPath:/home/jenkins/minikube-integration/17965-443749/.minikube/machines/addons-411655/id_rsa Username:docker}
	I0116 02:37:31.513071  452136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-443749/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0116 02:37:31.535632  452136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-443749/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0116 02:37:31.558200  452136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-443749/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0116 02:37:31.580645  452136 provision.go:86] duration metric: configureAuth took 480.309451ms
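Editor's note: configureAuth generated server.pem with the SANs listed above and copied it to /etc/docker inside the node. A hedged sketch for verifying those SANs on the machine:

    # Inspect the Subject Alternative Names baked into the provisioned server cert.
    sudo openssl x509 -in /etc/docker/server.pem -noout -text \
      | grep -A1 'Subject Alternative Name'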
	I0116 02:37:31.580674  452136 ubuntu.go:193] setting minikube options for container-runtime
	I0116 02:37:31.580863  452136 config.go:182] Loaded profile config "addons-411655": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 02:37:31.580980  452136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-411655
	I0116 02:37:31.597297  452136 main.go:141] libmachine: Using SSH client type: native
	I0116 02:37:31.597639  452136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 33207 <nil> <nil>}
	I0116 02:37:31.597658  452136 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 02:37:31.821376  452136 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
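Editor's note: a minimal sketch of checking the options file written by the tee above; that crio.service sources it via an EnvironmentFile= directive is an assumption about the kicbase image, not something this log shows:

    # Expect: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    cat /etc/sysconfig/crio.minikube
    # Assumption: the kicbase crio unit loads this file; confirm before relying on it.
    systemctl cat crio | grep -i EnvironmentFile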
	
	I0116 02:37:31.821410  452136 machine.go:91] provisioned docker machine in 4.032292297s
	I0116 02:37:31.821419  452136 client.go:171] LocalClient.Create took 14.390680448s
	I0116 02:37:31.821437  452136 start.go:167] duration metric: libmachine.API.Create for "addons-411655" took 14.390750099s
	I0116 02:37:31.821447  452136 start.go:300] post-start starting for "addons-411655" (driver="docker")
	I0116 02:37:31.821461  452136 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 02:37:31.821523  452136 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 02:37:31.821565  452136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-411655
	I0116 02:37:31.837812  452136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33207 SSHKeyPath:/home/jenkins/minikube-integration/17965-443749/.minikube/machines/addons-411655/id_rsa Username:docker}
	I0116 02:37:31.937263  452136 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 02:37:31.940269  452136 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0116 02:37:31.940308  452136 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0116 02:37:31.940317  452136 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0116 02:37:31.940324  452136 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0116 02:37:31.940335  452136 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-443749/.minikube/addons for local assets ...
	I0116 02:37:31.940393  452136 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-443749/.minikube/files for local assets ...
	I0116 02:37:31.940417  452136 start.go:303] post-start completed in 118.963481ms
	I0116 02:37:31.940696  452136 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-411655
	I0116 02:37:31.956527  452136 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/addons-411655/config.json ...
	I0116 02:37:31.956802  452136 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0116 02:37:31.956856  452136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-411655
	I0116 02:37:31.972406  452136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33207 SSHKeyPath:/home/jenkins/minikube-integration/17965-443749/.minikube/machines/addons-411655/id_rsa Username:docker}
	I0116 02:37:32.065118  452136 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0116 02:37:32.069150  452136 start.go:128] duration metric: createHost completed in 14.640806748s
	I0116 02:37:32.069177  452136 start.go:83] releasing machines lock for "addons-411655", held for 14.640991882s
	I0116 02:37:32.069247  452136 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-411655
	I0116 02:37:32.084765  452136 ssh_runner.go:195] Run: cat /version.json
	I0116 02:37:32.084782  452136 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 02:37:32.084823  452136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-411655
	I0116 02:37:32.084834  452136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-411655
	I0116 02:37:32.101665  452136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33207 SSHKeyPath:/home/jenkins/minikube-integration/17965-443749/.minikube/machines/addons-411655/id_rsa Username:docker}
	I0116 02:37:32.102544  452136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33207 SSHKeyPath:/home/jenkins/minikube-integration/17965-443749/.minikube/machines/addons-411655/id_rsa Username:docker}
	I0116 02:37:32.277185  452136 ssh_runner.go:195] Run: systemctl --version
	I0116 02:37:32.281325  452136 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 02:37:32.416914  452136 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0116 02:37:32.421217  452136 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 02:37:32.438662  452136 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0116 02:37:32.438750  452136 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 02:37:32.464718  452136 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
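Editor's note: the find/mv pair above disables the default CNI configs by renaming rather than deleting them, so the change is reversible. A sketch with the two files this run actually moved:

    # Renaming to *.mk_disabled hides the configs from CRI-O's lexical scan
    # of /etc/cni/net.d without destroying them.
    sudo mv /etc/cni/net.d/87-podman-bridge.conflist{,.mk_disabled}
    sudo mv /etc/cni/net.d/100-crio-bridge.conf{,.mk_disabled}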
	I0116 02:37:32.464741  452136 start.go:475] detecting cgroup driver to use...
	I0116 02:37:32.464782  452136 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0116 02:37:32.464841  452136 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 02:37:32.478544  452136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 02:37:32.488121  452136 docker.go:217] disabling cri-docker service (if available) ...
	I0116 02:37:32.488170  452136 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 02:37:32.499774  452136 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 02:37:32.512181  452136 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 02:37:32.588787  452136 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 02:37:32.669220  452136 docker.go:233] disabling docker service ...
	I0116 02:37:32.669296  452136 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 02:37:32.686810  452136 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 02:37:32.697364  452136 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 02:37:32.769853  452136 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 02:37:32.857027  452136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 02:37:32.867192  452136 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 02:37:32.881657  452136 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 02:37:32.881715  452136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 02:37:32.890102  452136 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 02:37:32.890162  452136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 02:37:32.898746  452136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 02:37:32.907152  452136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
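Editor's note: a hedged sketch for checking what the three sed edits above left behind; the expected values come from the commands themselves, since the full file is not shown in this log:

    # Verify the keys the sed edits rewrote in place.
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    # Expected (per the commands above):
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"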
	I0116 02:37:32.915713  452136 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 02:37:32.923620  452136 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 02:37:32.930835  452136 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 02:37:32.938317  452136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 02:37:33.012365  452136 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 02:37:33.116344  452136 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 02:37:33.116423  452136 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 02:37:33.119810  452136 start.go:543] Will wait 60s for crictl version
	I0116 02:37:33.119854  452136 ssh_runner.go:195] Run: which crictl
	I0116 02:37:33.122934  452136 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 02:37:33.156360  452136 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0116 02:37:33.156448  452136 ssh_runner.go:195] Run: crio --version
	I0116 02:37:33.190870  452136 ssh_runner.go:195] Run: crio --version
	I0116 02:37:33.226636  452136 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I0116 02:37:33.228238  452136 cli_runner.go:164] Run: docker network inspect addons-411655 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0116 02:37:33.244396  452136 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0116 02:37:33.247955  452136 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
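Editor's note: the one-liner above is dense; an unpacked sketch of the same idempotent hosts-file update (drop any stale host.minikube.internal line, append the fresh one, copy back under sudo):

    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '192.168.49.1\thost.minikube.internal\n'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts   # write via cp so sudo applies to the target file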
	I0116 02:37:33.258221  452136 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 02:37:33.258280  452136 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 02:37:33.312627  452136 crio.go:496] all images are preloaded for cri-o runtime.
	I0116 02:37:33.312650  452136 crio.go:415] Images already preloaded, skipping extraction
	I0116 02:37:33.312693  452136 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 02:37:33.344276  452136 crio.go:496] all images are preloaded for cri-o runtime.
	I0116 02:37:33.344302  452136 cache_images.go:84] Images are preloaded, skipping loading
	I0116 02:37:33.344359  452136 ssh_runner.go:195] Run: crio config
	I0116 02:37:33.384564  452136 cni.go:84] Creating CNI manager for ""
	I0116 02:37:33.384585  452136 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0116 02:37:33.384604  452136 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 02:37:33.384624  452136 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-411655 NodeName:addons-411655 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 02:37:33.384744  452136 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-411655"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
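	Editor's note: a hedged sketch for sanity-checking a generated config like the one above without touching the host; --dry-run runs the init phases against a temporary directory:

    # Path taken from this run; any kubeadm at the target version can validate it.
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run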
	
	I0116 02:37:33.384803  452136 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-411655 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-411655 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 02:37:33.384848  452136 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 02:37:33.393013  452136 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 02:37:33.393074  452136 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 02:37:33.400524  452136 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I0116 02:37:33.415988  452136 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 02:37:33.431726  452136 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
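Editor's note: the three scp calls above place the kubelet drop-in, the unit file, and the kubeadm config. A minimal sketch of confirming systemd will merge the drop-in:

    # Show the unit plus every drop-in systemd has found for it.
    systemctl cat kubelet
    # Reload so /etc/systemd/system/kubelet.service.d/10-kubeadm.conf takes effect.
    sudo systemctl daemon-reload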
	I0116 02:37:33.447578  452136 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0116 02:37:33.450798  452136 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 02:37:33.461154  452136 certs.go:56] Setting up /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/addons-411655 for IP: 192.168.49.2
	I0116 02:37:33.461194  452136 certs.go:190] acquiring lock for shared ca certs: {Name:mk8883b8c07de4938a73ea389443b00589415803 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:37:33.461317  452136 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17965-443749/.minikube/ca.key
	I0116 02:37:33.691678  452136 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17965-443749/.minikube/ca.crt ...
	I0116 02:37:33.691715  452136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-443749/.minikube/ca.crt: {Name:mkbcbab1cd9596ad0c71dc2b2c21541bc956582c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:37:33.691910  452136 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17965-443749/.minikube/ca.key ...
	I0116 02:37:33.691922  452136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-443749/.minikube/ca.key: {Name:mk9bbe300cc5520c469df382300bb0caf5a1c78f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:37:33.691998  452136 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17965-443749/.minikube/proxy-client-ca.key
	I0116 02:37:33.984980  452136 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17965-443749/.minikube/proxy-client-ca.crt ...
	I0116 02:37:33.985014  452136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-443749/.minikube/proxy-client-ca.crt: {Name:mk837666c161d5d54173aa65dbd1fdb824803b9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:37:33.985172  452136 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17965-443749/.minikube/proxy-client-ca.key ...
	I0116 02:37:33.985183  452136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-443749/.minikube/proxy-client-ca.key: {Name:mk7571c7bc4486762d10b9c435ee7e7c3e8ef2df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:37:33.985283  452136 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/addons-411655/client.key
	I0116 02:37:33.985296  452136 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/addons-411655/client.crt with IP's: []
	I0116 02:37:34.061818  452136 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/addons-411655/client.crt ...
	I0116 02:37:34.061851  452136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/addons-411655/client.crt: {Name:mk56e81044ec51bfb7aa2aa864657648befeb409 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:37:34.062014  452136 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/addons-411655/client.key ...
	I0116 02:37:34.062026  452136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/addons-411655/client.key: {Name:mkc0745bd54d22e01c5bd085b2e085e55a63cf09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:37:34.062095  452136 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/addons-411655/apiserver.key.dd3b5fb2
	I0116 02:37:34.062111  452136 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/addons-411655/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0116 02:37:34.160061  452136 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/addons-411655/apiserver.crt.dd3b5fb2 ...
	I0116 02:37:34.160096  452136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/addons-411655/apiserver.crt.dd3b5fb2: {Name:mkc6cb12094050ac97ce2b48f43adc797132a2a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:37:34.160247  452136 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/addons-411655/apiserver.key.dd3b5fb2 ...
	I0116 02:37:34.160277  452136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/addons-411655/apiserver.key.dd3b5fb2: {Name:mk6aa2d5c2fe008d15f1b74ff01d5ddb6e1d288a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:37:34.160374  452136 certs.go:337] copying /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/addons-411655/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/addons-411655/apiserver.crt
	I0116 02:37:34.160461  452136 certs.go:341] copying /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/addons-411655/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/addons-411655/apiserver.key
	I0116 02:37:34.160511  452136 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/addons-411655/proxy-client.key
	I0116 02:37:34.160537  452136 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/addons-411655/proxy-client.crt with IP's: []
	I0116 02:37:34.531742  452136 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/addons-411655/proxy-client.crt ...
	I0116 02:37:34.531783  452136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/addons-411655/proxy-client.crt: {Name:mk34fc93dcc41a896d566b5507a0653c001e1ede Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:37:34.531979  452136 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/addons-411655/proxy-client.key ...
	I0116 02:37:34.532001  452136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/addons-411655/proxy-client.key: {Name:mk437f9d706a98f12964bc3af7ab8e3de7871f63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:37:34.532228  452136 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-443749/.minikube/certs/home/jenkins/minikube-integration/17965-443749/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 02:37:34.532295  452136 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-443749/.minikube/certs/home/jenkins/minikube-integration/17965-443749/.minikube/certs/ca.pem (1078 bytes)
	I0116 02:37:34.532336  452136 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-443749/.minikube/certs/home/jenkins/minikube-integration/17965-443749/.minikube/certs/cert.pem (1123 bytes)
	I0116 02:37:34.532369  452136 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-443749/.minikube/certs/home/jenkins/minikube-integration/17965-443749/.minikube/certs/key.pem (1675 bytes)
	I0116 02:37:34.533155  452136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/addons-411655/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 02:37:34.556386  452136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/addons-411655/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0116 02:37:34.578498  452136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/addons-411655/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 02:37:34.600696  452136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/addons-411655/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0116 02:37:34.622210  452136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-443749/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 02:37:34.643134  452136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-443749/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0116 02:37:34.664520  452136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-443749/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 02:37:34.686041  452136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-443749/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 02:37:34.707100  452136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-443749/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 02:37:34.728716  452136 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 02:37:34.744655  452136 ssh_runner.go:195] Run: openssl version
	I0116 02:37:34.749835  452136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 02:37:34.758669  452136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:37:34.761749  452136 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 02:37 /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:37:34.761810  452136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:37:34.767948  452136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
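Editor's note: the test -L / ln step above reproduces what c_rehash does — OpenSSL locates CA files by subject-hash filenames, which is why minikubeCA.pem gets linked as b5213941.0. A sketch:

    # Compute the subject hash (prints b5213941 for this run's CA) and link it.
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"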
	I0116 02:37:34.776179  452136 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 02:37:34.778993  452136 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0116 02:37:34.779035  452136 kubeadm.go:404] StartCluster: {Name:addons-411655 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-411655 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 02:37:34.779123  452136 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 02:37:34.779169  452136 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 02:37:34.811488  452136 cri.go:89] found id: ""
	I0116 02:37:34.811549  452136 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 02:37:34.819679  452136 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 02:37:34.827639  452136 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0116 02:37:34.827702  452136 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 02:37:34.835484  452136 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 02:37:34.835542  452136 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0116 02:37:34.877920  452136 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0116 02:37:34.878027  452136 kubeadm.go:322] [preflight] Running pre-flight checks
	I0116 02:37:34.912941  452136 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0116 02:37:34.913035  452136 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1048-gcp
	I0116 02:37:34.913086  452136 kubeadm.go:322] OS: Linux
	I0116 02:37:34.913158  452136 kubeadm.go:322] CGROUPS_CPU: enabled
	I0116 02:37:34.913203  452136 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0116 02:37:34.913289  452136 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0116 02:37:34.913368  452136 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0116 02:37:34.913445  452136 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0116 02:37:34.913524  452136 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0116 02:37:34.913598  452136 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0116 02:37:34.913671  452136 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0116 02:37:34.913720  452136 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0116 02:37:34.972626  452136 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 02:37:34.972725  452136 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 02:37:34.972809  452136 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0116 02:37:35.164713  452136 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 02:37:35.167538  452136 out.go:204]   - Generating certificates and keys ...
	I0116 02:37:35.167620  452136 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0116 02:37:35.167679  452136 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0116 02:37:35.472332  452136 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0116 02:37:35.706082  452136 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0116 02:37:35.798694  452136 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0116 02:37:35.860648  452136 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0116 02:37:36.213070  452136 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0116 02:37:36.213358  452136 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-411655 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0116 02:37:36.343156  452136 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0116 02:37:36.343286  452136 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-411655 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0116 02:37:36.410559  452136 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0116 02:37:36.560291  452136 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0116 02:37:36.841963  452136 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0116 02:37:36.842050  452136 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 02:37:36.992847  452136 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 02:37:37.096556  452136 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 02:37:37.192115  452136 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 02:37:37.324687  452136 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 02:37:37.325129  452136 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 02:37:37.327323  452136 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 02:37:37.329723  452136 out.go:204]   - Booting up control plane ...
	I0116 02:37:37.329812  452136 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 02:37:37.329875  452136 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 02:37:37.330659  452136 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 02:37:37.338854  452136 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 02:37:37.339607  452136 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 02:37:37.339691  452136 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0116 02:37:37.414128  452136 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0116 02:37:42.416150  452136 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.002094 seconds
	I0116 02:37:42.416305  452136 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0116 02:37:42.427060  452136 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0116 02:37:42.944114  452136 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0116 02:37:42.944332  452136 kubeadm.go:322] [mark-control-plane] Marking the node addons-411655 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0116 02:37:43.454502  452136 kubeadm.go:322] [bootstrap-token] Using token: rsgddo.1yay4c5zxu3sfgca
	I0116 02:37:43.455829  452136 out.go:204]   - Configuring RBAC rules ...
	I0116 02:37:43.455971  452136 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0116 02:37:43.460234  452136 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0116 02:37:43.466934  452136 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0116 02:37:43.469446  452136 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0116 02:37:43.471976  452136 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0116 02:37:43.474526  452136 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0116 02:37:43.483705  452136 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0116 02:37:43.624013  452136 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0116 02:37:43.864222  452136 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0116 02:37:43.865020  452136 kubeadm.go:322] 
	I0116 02:37:43.865086  452136 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0116 02:37:43.865093  452136 kubeadm.go:322] 
	I0116 02:37:43.865199  452136 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0116 02:37:43.865220  452136 kubeadm.go:322] 
	I0116 02:37:43.865253  452136 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0116 02:37:43.865358  452136 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0116 02:37:43.865443  452136 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0116 02:37:43.865454  452136 kubeadm.go:322] 
	I0116 02:37:43.865520  452136 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0116 02:37:43.865532  452136 kubeadm.go:322] 
	I0116 02:37:43.865568  452136 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0116 02:37:43.865576  452136 kubeadm.go:322] 
	I0116 02:37:43.865614  452136 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0116 02:37:43.865681  452136 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0116 02:37:43.865743  452136 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0116 02:37:43.865753  452136 kubeadm.go:322] 
	I0116 02:37:43.865866  452136 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0116 02:37:43.865970  452136 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0116 02:37:43.865989  452136 kubeadm.go:322] 
	I0116 02:37:43.866076  452136 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token rsgddo.1yay4c5zxu3sfgca \
	I0116 02:37:43.866166  452136 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8cf2f52e6e786139868a71d0da6c4e60f90166b48a1f8c1755e09d650797d85a \
	I0116 02:37:43.866186  452136 kubeadm.go:322] 	--control-plane 
	I0116 02:37:43.866190  452136 kubeadm.go:322] 
	I0116 02:37:43.866314  452136 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0116 02:37:43.866326  452136 kubeadm.go:322] 
	I0116 02:37:43.866425  452136 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token rsgddo.1yay4c5zxu3sfgca \
	I0116 02:37:43.866582  452136 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8cf2f52e6e786139868a71d0da6c4e60f90166b48a1f8c1755e09d650797d85a 
	I0116 02:37:43.868438  452136 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1048-gcp\n", err: exit status 1
	I0116 02:37:43.868537  452136 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0116 02:37:43.868570  452136 cni.go:84] Creating CNI manager for ""
	I0116 02:37:43.868579  452136 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0116 02:37:43.870366  452136 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0116 02:37:43.871748  452136 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0116 02:37:43.875348  452136 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0116 02:37:43.875365  452136 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0116 02:37:43.891243  452136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0116 02:37:44.531157  452136 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 02:37:44.531243  452136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:37:44.531255  452136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=6e8fa5f64d0e7272be43ff25ed3826261f0a2578 minikube.k8s.io/name=addons-411655 minikube.k8s.io/updated_at=2024_01_16T02_37_44_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:37:44.538298  452136 ops.go:34] apiserver oom_adj: -16
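Editor's note: the oom_adj probe above confirms the apiserver runs with a negative OOM adjustment (-16 here), making the kernel prefer other victims under memory pressure. A sketch of reading both the legacy and current knobs:

    cat /proc/$(pgrep kube-apiserver)/oom_adj        # legacy interface; -16 in this run
    cat /proc/$(pgrep kube-apiserver)/oom_score_adj  # modern equivalent, range -1000..1000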
	I0116 02:37:44.623771  452136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:37:45.124204  452136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:37:45.623877  452136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:37:46.124514  452136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:37:46.624719  452136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:37:47.123806  452136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:37:47.624641  452136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:37:48.123835  452136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:37:48.624550  452136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:37:49.123829  452136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:37:49.623986  452136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:37:50.124613  452136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:37:50.623887  452136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:37:51.124038  452136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:37:51.624013  452136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:37:52.123817  452136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:37:52.624035  452136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:37:53.124856  452136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:37:53.623979  452136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:37:54.123855  452136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:37:54.623851  452136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:37:55.124604  452136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:37:55.624514  452136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:37:56.124005  452136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:37:56.624756  452136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:37:56.688229  452136 kubeadm.go:1088] duration metric: took 12.157050895s to wait for elevateKubeSystemPrivileges.
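Editor's note: the repeated "kubectl get sa default" calls above are a readiness poll — the default ServiceAccount only exists once the controller-manager's service-account controllers are up, which took about 12s here. A sketch of the same wait as a loop:

    # Retry roughly every half second until the default ServiceAccount appears,
    # mirroring the polling logged above (paths taken from this run).
    until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done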
	I0116 02:37:56.688286  452136 kubeadm.go:406] StartCluster complete in 21.909254819s
	I0116 02:37:56.688313  452136 settings.go:142] acquiring lock: {Name:mk9828dcd1e8ccfccc84768ea3ab177cb7be8afc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:37:56.688431  452136 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17965-443749/kubeconfig
	I0116 02:37:56.688804  452136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-443749/kubeconfig: {Name:mka24a12b8e1d963a345dadb59b1cdf4f4debade Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:37:56.689002  452136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 02:37:56.689174  452136 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0116 02:37:56.689260  452136 config.go:182] Loaded profile config "addons-411655": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 02:37:56.689286  452136 addons.go:69] Setting yakd=true in profile "addons-411655"
	I0116 02:37:56.689307  452136 addons.go:69] Setting ingress-dns=true in profile "addons-411655"
	I0116 02:37:56.689312  452136 addons.go:234] Setting addon yakd=true in "addons-411655"
	I0116 02:37:56.689322  452136 addons.go:234] Setting addon ingress-dns=true in "addons-411655"
	I0116 02:37:56.689362  452136 host.go:66] Checking if "addons-411655" exists ...
	I0116 02:37:56.689379  452136 host.go:66] Checking if "addons-411655" exists ...
	I0116 02:37:56.689387  452136 addons.go:69] Setting cloud-spanner=true in profile "addons-411655"
	I0116 02:37:56.689405  452136 addons.go:234] Setting addon cloud-spanner=true in "addons-411655"
	I0116 02:37:56.689459  452136 host.go:66] Checking if "addons-411655" exists ...
	I0116 02:37:56.689774  452136 cli_runner.go:164] Run: docker container inspect addons-411655 --format={{.State.Status}}
	I0116 02:37:56.689841  452136 cli_runner.go:164] Run: docker container inspect addons-411655 --format={{.State.Status}}
	I0116 02:37:56.689372  452136 addons.go:69] Setting default-storageclass=true in profile "addons-411655"
	I0116 02:37:56.689870  452136 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-411655"
	I0116 02:37:56.690037  452136 addons.go:69] Setting helm-tiller=true in profile "addons-411655"
	I0116 02:37:56.690059  452136 addons.go:69] Setting gcp-auth=true in profile "addons-411655"
	I0116 02:37:56.690070  452136 addons.go:234] Setting addon helm-tiller=true in "addons-411655"
	I0116 02:37:56.690089  452136 mustload.go:65] Loading cluster: addons-411655
	I0116 02:37:56.690089  452136 addons.go:69] Setting registry=true in profile "addons-411655"
	I0116 02:37:56.690105  452136 addons.go:234] Setting addon registry=true in "addons-411655"
	I0116 02:37:56.690118  452136 host.go:66] Checking if "addons-411655" exists ...
	I0116 02:37:56.690139  452136 cli_runner.go:164] Run: docker container inspect addons-411655 --format={{.State.Status}}
	I0116 02:37:56.690156  452136 host.go:66] Checking if "addons-411655" exists ...
	I0116 02:37:56.690285  452136 config.go:182] Loaded profile config "addons-411655": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 02:37:56.690520  452136 cli_runner.go:164] Run: docker container inspect addons-411655 --format={{.State.Status}}
	I0116 02:37:56.690562  452136 cli_runner.go:164] Run: docker container inspect addons-411655 --format={{.State.Status}}
	I0116 02:37:56.690607  452136 cli_runner.go:164] Run: docker container inspect addons-411655 --format={{.State.Status}}
	I0116 02:37:56.689849  452136 cli_runner.go:164] Run: docker container inspect addons-411655 --format={{.State.Status}}
	I0116 02:37:56.690892  452136 addons.go:69] Setting ingress=true in profile "addons-411655"
	I0116 02:37:56.690922  452136 addons.go:234] Setting addon ingress=true in "addons-411655"
	I0116 02:37:56.690930  452136 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-411655"
	I0116 02:37:56.690983  452136 host.go:66] Checking if "addons-411655" exists ...
	I0116 02:37:56.690975  452136 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-411655"
	I0116 02:37:56.691025  452136 host.go:66] Checking if "addons-411655" exists ...
	I0116 02:37:56.691439  452136 cli_runner.go:164] Run: docker container inspect addons-411655 --format={{.State.Status}}
	I0116 02:37:56.691462  452136 cli_runner.go:164] Run: docker container inspect addons-411655 --format={{.State.Status}}
	I0116 02:37:56.692183  452136 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-411655"
	I0116 02:37:56.692376  452136 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-411655"
	I0116 02:37:56.692502  452136 addons.go:69] Setting metrics-server=true in profile "addons-411655"
	I0116 02:37:56.692529  452136 addons.go:234] Setting addon metrics-server=true in "addons-411655"
	I0116 02:37:56.692569  452136 host.go:66] Checking if "addons-411655" exists ...
	I0116 02:37:56.692609  452136 addons.go:69] Setting storage-provisioner=true in profile "addons-411655"
	I0116 02:37:56.694810  452136 addons.go:234] Setting addon storage-provisioner=true in "addons-411655"
	I0116 02:37:56.693012  452136 cli_runner.go:164] Run: docker container inspect addons-411655 --format={{.State.Status}}
	I0116 02:37:56.693020  452136 addons.go:69] Setting inspektor-gadget=true in profile "addons-411655"
	I0116 02:37:56.693097  452136 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-411655"
	I0116 02:37:56.692484  452136 addons.go:69] Setting volumesnapshots=true in profile "addons-411655"
	I0116 02:37:56.694947  452136 addons.go:234] Setting addon inspektor-gadget=true in "addons-411655"
	I0116 02:37:56.695036  452136 host.go:66] Checking if "addons-411655" exists ...
	I0116 02:37:56.694951  452136 addons.go:234] Setting addon volumesnapshots=true in "addons-411655"
	I0116 02:37:56.695208  452136 host.go:66] Checking if "addons-411655" exists ...
	I0116 02:37:56.694928  452136 host.go:66] Checking if "addons-411655" exists ...
	I0116 02:37:56.695507  452136 cli_runner.go:164] Run: docker container inspect addons-411655 --format={{.State.Status}}
	I0116 02:37:56.695735  452136 cli_runner.go:164] Run: docker container inspect addons-411655 --format={{.State.Status}}
	I0116 02:37:56.695787  452136 cli_runner.go:164] Run: docker container inspect addons-411655 --format={{.State.Status}}
	I0116 02:37:56.694964  452136 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-411655"
	I0116 02:37:56.699919  452136 host.go:66] Checking if "addons-411655" exists ...
	I0116 02:37:56.700424  452136 cli_runner.go:164] Run: docker container inspect addons-411655 --format={{.State.Status}}
	I0116 02:37:56.696383  452136 cli_runner.go:164] Run: docker container inspect addons-411655 --format={{.State.Status}}
	I0116 02:37:56.739400  452136 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0116 02:37:56.735849  452136 addons.go:234] Setting addon default-storageclass=true in "addons-411655"
	I0116 02:37:56.748867  452136 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0116 02:37:56.748912  452136 host.go:66] Checking if "addons-411655" exists ...
	I0116 02:37:56.754111  452136 host.go:66] Checking if "addons-411655" exists ...
	I0116 02:37:56.755072  452136 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.13
	I0116 02:37:56.755078  452136 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0116 02:37:56.755082  452136 out.go:177]   - Using image docker.io/registry:2.8.3
	I0116 02:37:56.755099  452136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
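
The addons.go:426/ssh_runner.go:362 pairs here and below are how minikube ships each addon manifest onto the node: the YAML is held in memory and streamed over the SSH session into /etc/kubernetes/addons/ ("scp memory"), rather than copied from a file already on the node. Done by hand, the same transfer would look roughly like this sketch (port, key path, and user taken from the sshutil lines further down):

	cat helm-tiller-dp.yaml | ssh -p 33207 \
	  -i /home/jenkins/minikube-integration/17965-443749/.minikube/machines/addons-411655/id_rsa \
	  docker@127.0.0.1 "sudo tee /etc/kubernetes/addons/helm-tiller-dp.yaml >/dev/null"
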
	I0116 02:37:56.755633  452136 cli_runner.go:164] Run: docker container inspect addons-411655 --format={{.State.Status}}
	I0116 02:37:56.758566  452136 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0116 02:37:56.756713  452136 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0116 02:37:56.756791  452136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-411655
	I0116 02:37:56.760127  452136 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0116 02:37:56.761071  452136 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.5
	I0116 02:37:56.764670  452136 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0116 02:37:56.765055  452136 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0116 02:37:56.768568  452136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0116 02:37:56.774160  452136 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I0116 02:37:56.776226  452136 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-411655"
	I0116 02:37:56.776345  452136 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 02:37:56.776617  452136 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0116 02:37:56.776634  452136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0116 02:37:56.776642  452136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0116 02:37:56.780770  452136 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0116 02:37:56.778214  452136 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0116 02:37:56.778278  452136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-411655
	I0116 02:37:56.779641  452136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-411655
	I0116 02:37:56.778182  452136 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0116 02:37:56.779640  452136 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0116 02:37:56.779658  452136 host.go:66] Checking if "addons-411655" exists ...
	I0116 02:37:56.779685  452136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-411655
	I0116 02:37:56.783373  452136 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 02:37:56.783406  452136 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 02:37:56.783433  452136 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0116 02:37:56.783996  452136 cli_runner.go:164] Run: docker container inspect addons-411655 --format={{.State.Status}}
	I0116 02:37:56.784740  452136 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0116 02:37:56.784757  452136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0116 02:37:56.784809  452136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-411655
	I0116 02:37:56.786967  452136 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0116 02:37:56.785069  452136 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0116 02:37:56.785163  452136 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0116 02:37:56.785177  452136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 02:37:56.785184  452136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 02:37:56.785205  452136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0116 02:37:56.789646  452136 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I0116 02:37:56.788310  452136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0116 02:37:56.788417  452136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-411655
	I0116 02:37:56.788446  452136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-411655
	I0116 02:37:56.788471  452136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-411655
	I0116 02:37:56.788840  452136 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 02:37:56.791011  452136 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0116 02:37:56.791068  452136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-411655
	I0116 02:37:56.792017  452136 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0116 02:37:56.793260  452136 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0116 02:37:56.792350  452136 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0116 02:37:56.792437  452136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0116 02:37:56.792450  452136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 02:37:56.794481  452136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-411655
	I0116 02:37:56.794475  452136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33207 SSHKeyPath:/home/jenkins/minikube-integration/17965-443749/.minikube/machines/addons-411655/id_rsa Username:docker}
	I0116 02:37:56.795733  452136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I0116 02:37:56.795796  452136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-411655
	I0116 02:37:56.797163  452136 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0116 02:37:56.796058  452136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-411655
	I0116 02:37:56.800270  452136 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0116 02:37:56.808439  452136 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0116 02:37:56.810195  452136 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0116 02:37:56.810216  452136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0116 02:37:56.810277  452136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-411655
	I0116 02:37:56.818788  452136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33207 SSHKeyPath:/home/jenkins/minikube-integration/17965-443749/.minikube/machines/addons-411655/id_rsa Username:docker}
	I0116 02:37:56.829738  452136 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0116 02:37:56.829526  452136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33207 SSHKeyPath:/home/jenkins/minikube-integration/17965-443749/.minikube/machines/addons-411655/id_rsa Username:docker}
	I0116 02:37:56.831141  452136 out.go:177]   - Using image docker.io/busybox:stable
	I0116 02:37:56.832738  452136 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0116 02:37:56.832761  452136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0116 02:37:56.832814  452136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-411655
	I0116 02:37:56.833834  452136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33207 SSHKeyPath:/home/jenkins/minikube-integration/17965-443749/.minikube/machines/addons-411655/id_rsa Username:docker}
	I0116 02:37:56.856411  452136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33207 SSHKeyPath:/home/jenkins/minikube-integration/17965-443749/.minikube/machines/addons-411655/id_rsa Username:docker}
	I0116 02:37:56.858208  452136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33207 SSHKeyPath:/home/jenkins/minikube-integration/17965-443749/.minikube/machines/addons-411655/id_rsa Username:docker}
	I0116 02:37:56.861490  452136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33207 SSHKeyPath:/home/jenkins/minikube-integration/17965-443749/.minikube/machines/addons-411655/id_rsa Username:docker}
	I0116 02:37:56.863663  452136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33207 SSHKeyPath:/home/jenkins/minikube-integration/17965-443749/.minikube/machines/addons-411655/id_rsa Username:docker}
	I0116 02:37:56.865566  452136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33207 SSHKeyPath:/home/jenkins/minikube-integration/17965-443749/.minikube/machines/addons-411655/id_rsa Username:docker}
	I0116 02:37:56.866210  452136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33207 SSHKeyPath:/home/jenkins/minikube-integration/17965-443749/.minikube/machines/addons-411655/id_rsa Username:docker}
	I0116 02:37:56.869286  452136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33207 SSHKeyPath:/home/jenkins/minikube-integration/17965-443749/.minikube/machines/addons-411655/id_rsa Username:docker}
	I0116 02:37:56.870396  452136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33207 SSHKeyPath:/home/jenkins/minikube-integration/17965-443749/.minikube/machines/addons-411655/id_rsa Username:docker}
	I0116 02:37:56.870740  452136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33207 SSHKeyPath:/home/jenkins/minikube-integration/17965-443749/.minikube/machines/addons-411655/id_rsa Username:docker}
	I0116 02:37:56.877977  452136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33207 SSHKeyPath:/home/jenkins/minikube-integration/17965-443749/.minikube/machines/addons-411655/id_rsa Username:docker}
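
The repeated docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calls above use a Go template to extract the host port that Docker mapped to the guest's SSH port 22; the sshutil lines confirm it resolved to 127.0.0.1:33207 for every client. For manual debugging, an equivalent one-off query is:

	docker port addons-411655 22/tcp
	# prints the host binding, e.g. 127.0.0.1:33207
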
	W0116 02:37:56.908481  452136 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0116 02:37:56.908544  452136 retry.go:31] will retry after 149.04936ms: ssh: handshake failed: EOF
	I0116 02:37:57.107906  452136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
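
The bash pipeline above rewrites the CoreDNS ConfigMap so that host.minikube.internal resolves to the Docker host gateway. Reconstructed from the two sed expressions (not captured from the cluster), the patched Corefile should contain roughly:

	.:53 {
	    log
	    errors
	    ...
	    hosts {
	       192.168.49.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    ...
	}
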
	I0116 02:37:57.203687  452136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0116 02:37:57.208157  452136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0116 02:37:57.214020  452136 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0116 02:37:57.214110  452136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0116 02:37:57.313654  452136 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0116 02:37:57.313752  452136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0116 02:37:57.314572  452136 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-411655" context rescaled to 1 replicas
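
The kapi.go:248 line records that the coredns deployment was scaled to a single replica for this one-node cluster; the equivalent manual command is:

	kubectl -n kube-system scale deployment coredns --replicas=1
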
	I0116 02:37:57.314681  452136 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 02:37:57.317999  452136 out.go:177] * Verifying Kubernetes components...
	I0116 02:37:57.319485  452136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
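
systemctl is-active --quiet suppresses all output and reports unit state through the exit code alone, which is why the log later records only the completion time for this check; interactively, one would write something like:

	systemctl is-active --quiet kubelet && echo "kubelet is running"
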
	I0116 02:37:57.402594  452136 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0116 02:37:57.402681  452136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0116 02:37:57.406220  452136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0116 02:37:57.409555  452136 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0116 02:37:57.409627  452136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0116 02:37:57.409810  452136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0116 02:37:57.420776  452136 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0116 02:37:57.420859  452136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0116 02:37:57.426182  452136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0116 02:37:57.511543  452136 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0116 02:37:57.511644  452136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0116 02:37:57.520963  452136 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0116 02:37:57.520994  452136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0116 02:37:57.523812  452136 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0116 02:37:57.523849  452136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0116 02:37:57.604596  452136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 02:37:57.622453  452136 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0116 02:37:57.622498  452136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0116 02:37:57.706683  452136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 02:37:57.708066  452136 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0116 02:37:57.708089  452136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0116 02:37:57.717862  452136 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0116 02:37:57.717899  452136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0116 02:37:57.721554  452136 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0116 02:37:57.721585  452136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0116 02:37:57.818508  452136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0116 02:37:57.901950  452136 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0116 02:37:57.902080  452136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0116 02:37:57.910234  452136 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 02:37:57.910312  452136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0116 02:37:58.006196  452136 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0116 02:37:58.006289  452136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0116 02:37:58.011370  452136 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0116 02:37:58.011413  452136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0116 02:37:58.015751  452136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0116 02:37:58.103317  452136 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0116 02:37:58.103416  452136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0116 02:37:58.214578  452136 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0116 02:37:58.214685  452136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0116 02:37:58.220670  452136 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0116 02:37:58.220757  452136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0116 02:37:58.222982  452136 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0116 02:37:58.223048  452136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0116 02:37:58.322383  452136 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0116 02:37:58.322490  452136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0116 02:37:58.408993  452136 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 02:37:58.409092  452136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 02:37:58.423358  452136 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0116 02:37:58.423391  452136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0116 02:37:58.702078  452136 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0116 02:37:58.702126  452136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0116 02:37:58.707531  452136 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 02:37:58.707614  452136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 02:37:58.713272  452136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0116 02:37:58.714105  452136 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0116 02:37:58.714166  452136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0116 02:37:58.816438  452136 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0116 02:37:58.816472  452136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0116 02:37:58.914701  452136 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.806678131s)
	I0116 02:37:58.914881  452136 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0116 02:37:58.920050  452136 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0116 02:37:58.920128  452136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0116 02:37:59.114466  452136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 02:37:59.202140  452136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0116 02:37:59.203019  452136 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0116 02:37:59.203049  452136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0116 02:37:59.221763  452136 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0116 02:37:59.221852  452136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0116 02:37:59.602750  452136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0116 02:37:59.605356  452136 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0116 02:37:59.605396  452136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0116 02:37:59.915069  452136 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0116 02:37:59.915103  452136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0116 02:38:00.300974  452136 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0116 02:38:00.301009  452136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0116 02:38:00.817699  452136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0116 02:38:00.906807  452136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.703076187s)
	I0116 02:38:03.608834  452136 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0116 02:38:03.608922  452136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-411655
	I0116 02:38:03.614559  452136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.406344033s)
	I0116 02:38:03.614598  452136 addons.go:470] Verifying addon ingress=true in "addons-411655"
	I0116 02:38:03.614628  452136 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (6.295104608s)
	I0116 02:38:03.617355  452136 out.go:177] * Verifying ingress addon...
	I0116 02:38:03.614688  452136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.20843113s)
	I0116 02:38:03.614744  452136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.204897865s)
	I0116 02:38:03.614787  452136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.188534482s)
	I0116 02:38:03.614836  452136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.010190849s)
	I0116 02:38:03.614884  452136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.908156796s)
	I0116 02:38:03.614914  452136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (5.796300064s)
	I0116 02:38:03.614948  452136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.599169379s)
	I0116 02:38:03.614979  452136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.901629702s)
	I0116 02:38:03.615025  452136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.500484331s)
	I0116 02:38:03.615136  452136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.412957124s)
	I0116 02:38:03.615182  452136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.012337281s)
	I0116 02:38:03.615788  452136 node_ready.go:35] waiting up to 6m0s for node "addons-411655" to be "Ready" ...
	I0116 02:38:03.619873  452136 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0116 02:38:03.621829  452136 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-411655 service yakd-dashboard -n yakd-dashboard
	
	I0116 02:38:03.620070  452136 addons.go:470] Verifying addon registry=true in "addons-411655"
	I0116 02:38:03.620081  452136 addons.go:470] Verifying addon metrics-server=true in "addons-411655"
	W0116 02:38:03.620109  452136 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0116 02:38:03.623661  452136 retry.go:31] will retry after 237.823564ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
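
The failure above is a create-ordering race rather than a broken manifest: the batch apply submits the csi-hostpath-snapclass VolumeSnapshotClass in the same request set that creates the snapshot.storage.k8s.io CRDs, and the API server has not finished registering the new kind when the custom resource arrives, hence "ensure CRDs are installed first". minikube simply retries (below, the re-apply with --force completes successfully). A manual workaround with the same effect is to wait for the CRD to be established before applying the class, e.g. (a sketch using the CRD name from the stderr above):

	kubectl wait --for=condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
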
	I0116 02:38:03.625319  452136 out.go:177] * Verifying registry addon...
	I0116 02:38:03.626504  452136 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0116 02:38:03.626697  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:03.627531  452136 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0116 02:38:03.634363  452136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33207 SSHKeyPath:/home/jenkins/minikube-integration/17965-443749/.minikube/machines/addons-411655/id_rsa Username:docker}
	W0116 02:38:03.634765  452136 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
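
The "object has been modified" message is the API server's optimistic-concurrency check: the addon read the local-path StorageClass, another writer updated it in the meantime, and the write carrying the stale resourceVersion was rejected. The standard remedy is to re-read and retry; done by hand, marking the class default is a single idempotent patch that can safely be re-run on conflict (a sketch, not the addon's actual code path):

	kubectl patch storageclass local-path -p \
	  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
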
	I0116 02:38:03.635256  452136 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0116 02:38:03.635312  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:03.807793  452136 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0116 02:38:03.825683  452136 addons.go:234] Setting addon gcp-auth=true in "addons-411655"
	I0116 02:38:03.825747  452136 host.go:66] Checking if "addons-411655" exists ...
	I0116 02:38:03.826214  452136 cli_runner.go:164] Run: docker container inspect addons-411655 --format={{.State.Status}}
	I0116 02:38:03.846355  452136 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0116 02:38:03.846447  452136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-411655
	I0116 02:38:03.861807  452136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0116 02:38:03.865094  452136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33207 SSHKeyPath:/home/jenkins/minikube-integration/17965-443749/.minikube/machines/addons-411655/id_rsa Username:docker}
	I0116 02:38:04.124321  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:04.134086  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:04.507121  452136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.689363025s)
	I0116 02:38:04.507164  452136 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-411655"
	I0116 02:38:04.508790  452136 out.go:177] * Verifying csi-hostpath-driver addon...
	I0116 02:38:04.511220  452136 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0116 02:38:04.514582  452136 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0116 02:38:04.514601  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:04.624357  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:04.631710  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:04.896749  452136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.034896239s)
	I0116 02:38:04.896815  452136 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.050412967s)
	I0116 02:38:04.898719  452136 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0116 02:38:04.900800  452136 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0116 02:38:04.902308  452136 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0116 02:38:04.902329  452136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0116 02:38:04.918820  452136 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0116 02:38:04.918845  452136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0116 02:38:04.934821  452136 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0116 02:38:04.934845  452136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I0116 02:38:04.951158  452136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0116 02:38:05.016448  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:05.124579  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:05.131765  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:05.431760  452136 addons.go:470] Verifying addon gcp-auth=true in "addons-411655"
	I0116 02:38:05.433478  452136 out.go:177] * Verifying gcp-auth addon...
	I0116 02:38:05.435230  452136 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0116 02:38:05.437790  452136 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0116 02:38:05.437807  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:05.516481  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:05.624806  452136 node_ready.go:58] node "addons-411655" has status "Ready":"False"
	I0116 02:38:05.625896  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:05.631420  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
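
The interleaved kapi.go:96 and node_ready.go:58 lines from here on are minikube's readiness polling: each verifier re-checks its label selector every few hundred milliseconds until the matched pods leave Pending, while node_ready waits for the node's Ready condition. The same checks can be expressed directly with kubectl (a sketch using the selectors and the 6m0s timeout from the log):

	kubectl wait --for=condition=Ready node/addons-411655 --timeout=6m
	kubectl -n ingress-nginx wait --for=condition=Ready pod \
	  -l app.kubernetes.io/name=ingress-nginx --timeout=6m
	kubectl -n kube-system wait --for=condition=Ready pod \
	  -l kubernetes.io/minikube-addons=registry --timeout=6m
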
	I0116 02:38:05.939961  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:06.016690  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:06.124107  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:06.134362  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:06.439170  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:06.516748  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:06.624419  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:06.631866  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:06.939836  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:07.017299  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:07.125477  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:07.132510  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:07.438737  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:07.515454  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:07.624768  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:07.631771  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:07.939200  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:08.016144  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:08.123496  452136 node_ready.go:58] node "addons-411655" has status "Ready":"False"
	I0116 02:38:08.124865  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:08.131816  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:08.439387  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:08.516615  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:08.623591  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:08.631987  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:08.938973  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:09.015465  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:09.123624  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:09.130964  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:09.439242  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:09.515929  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:09.623968  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:09.631644  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:09.939102  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:10.015708  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:10.123728  452136 node_ready.go:58] node "addons-411655" has status "Ready":"False"
	I0116 02:38:10.124052  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:10.131658  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:10.438806  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:10.515404  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:10.624509  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:10.631036  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:10.939421  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:11.016009  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:11.124521  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:11.131328  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:11.438438  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:11.516553  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:11.625269  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:11.631755  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:11.938776  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:12.015707  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:12.123711  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:12.131242  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:12.438578  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:12.515595  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:12.623370  452136 node_ready.go:58] node "addons-411655" has status "Ready":"False"
	I0116 02:38:12.623882  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:12.631213  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:12.938436  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:13.015801  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:13.123945  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:13.131524  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:13.438991  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:13.515638  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:13.624158  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:13.631244  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:13.938664  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:14.015302  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:14.124638  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:14.130966  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:14.439130  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:14.515736  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:14.623765  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:14.631246  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:14.938212  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:15.015738  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:15.123466  452136 node_ready.go:58] node "addons-411655" has status "Ready":"False"
	I0116 02:38:15.124043  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:15.131243  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:15.438495  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:15.515084  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:15.624214  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:15.630854  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:15.939409  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:16.015984  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:16.124219  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:16.131729  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:16.438598  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:16.515352  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:16.624456  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:16.630773  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:16.939097  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:17.015819  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:17.123832  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:17.131441  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:17.438739  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:17.515482  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:17.624037  452136 node_ready.go:58] node "addons-411655" has status "Ready":"False"
	I0116 02:38:17.624891  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:17.631251  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:17.938141  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:18.015768  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:18.124010  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:18.131487  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:18.438168  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:18.515760  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:18.623757  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:18.631452  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:18.938594  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:19.015441  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:19.124727  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:19.130754  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:19.438851  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:19.515860  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:19.624071  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:19.631595  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:19.938563  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:20.015092  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:20.123915  452136 node_ready.go:58] node "addons-411655" has status "Ready":"False"
	I0116 02:38:20.124387  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:20.130431  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:20.438785  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:20.515306  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:20.625185  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:20.631705  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:20.939323  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:21.016110  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:21.124397  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:21.131126  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:21.439318  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:21.515826  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:21.623977  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:21.631526  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:21.938453  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:22.015947  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:22.124173  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:22.130700  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:22.438943  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:22.515620  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:22.623706  452136 node_ready.go:58] node "addons-411655" has status "Ready":"False"
	I0116 02:38:22.623885  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:22.631354  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:22.938737  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:23.015340  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:23.124372  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:23.131107  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:23.439222  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:23.515972  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:23.623931  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:23.631774  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:23.939296  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:24.015755  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:24.124226  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:24.131384  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:24.438588  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:24.515242  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:24.624102  452136 node_ready.go:58] node "addons-411655" has status "Ready":"False"
	I0116 02:38:24.624542  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:24.631006  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:24.939183  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:25.015907  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:25.123842  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:25.133322  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:25.438424  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:25.516241  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:25.624359  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:25.631667  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:25.939165  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:26.016024  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:26.124326  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:26.132122  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:26.439204  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:26.515808  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:26.624017  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:26.631903  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:26.939399  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:27.016176  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:27.124024  452136 node_ready.go:58] node "addons-411655" has status "Ready":"False"
	I0116 02:38:27.124430  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:27.130962  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:27.439178  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:27.515618  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:27.624533  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:27.630762  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:27.939056  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:28.015688  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:28.124112  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:28.131802  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:28.438966  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:28.603011  452136 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0116 02:38:28.603097  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:28.623460  452136 node_ready.go:49] node "addons-411655" has status "Ready":"True"
	I0116 02:38:28.623496  452136 node_ready.go:38] duration metric: took 25.003332896s waiting for node "addons-411655" to be "Ready" ...
	I0116 02:38:28.623518  452136 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 02:38:28.624600  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:28.633063  452136 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0116 02:38:28.633089  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:28.634232  452136 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-g2rlh" in "kube-system" namespace to be "Ready" ...
	I0116 02:38:28.939499  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:29.017261  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:29.130562  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:29.134550  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:29.439069  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:29.516695  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:29.624637  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:29.632093  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:29.940481  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:30.017347  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:30.124156  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:30.133352  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:30.141061  452136 pod_ready.go:92] pod "coredns-5dd5756b68-g2rlh" in "kube-system" namespace has status "Ready":"True"
	I0116 02:38:30.141085  452136 pod_ready.go:81] duration metric: took 1.506828516s waiting for pod "coredns-5dd5756b68-g2rlh" in "kube-system" namespace to be "Ready" ...
	I0116 02:38:30.141111  452136 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-411655" in "kube-system" namespace to be "Ready" ...
	I0116 02:38:30.145714  452136 pod_ready.go:92] pod "etcd-addons-411655" in "kube-system" namespace has status "Ready":"True"
	I0116 02:38:30.145737  452136 pod_ready.go:81] duration metric: took 4.613441ms waiting for pod "etcd-addons-411655" in "kube-system" namespace to be "Ready" ...
	I0116 02:38:30.145750  452136 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-411655" in "kube-system" namespace to be "Ready" ...
	I0116 02:38:30.150847  452136 pod_ready.go:92] pod "kube-apiserver-addons-411655" in "kube-system" namespace has status "Ready":"True"
	I0116 02:38:30.150874  452136 pod_ready.go:81] duration metric: took 5.117555ms waiting for pod "kube-apiserver-addons-411655" in "kube-system" namespace to be "Ready" ...
	I0116 02:38:30.150886  452136 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-411655" in "kube-system" namespace to be "Ready" ...
	I0116 02:38:30.155959  452136 pod_ready.go:92] pod "kube-controller-manager-addons-411655" in "kube-system" namespace has status "Ready":"True"
	I0116 02:38:30.155985  452136 pod_ready.go:81] duration metric: took 5.090408ms waiting for pod "kube-controller-manager-addons-411655" in "kube-system" namespace to be "Ready" ...
	I0116 02:38:30.156001  452136 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hnr6q" in "kube-system" namespace to be "Ready" ...
	I0116 02:38:30.224587  452136 pod_ready.go:92] pod "kube-proxy-hnr6q" in "kube-system" namespace has status "Ready":"True"
	I0116 02:38:30.224616  452136 pod_ready.go:81] duration metric: took 68.605799ms waiting for pod "kube-proxy-hnr6q" in "kube-system" namespace to be "Ready" ...
	I0116 02:38:30.224630  452136 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-411655" in "kube-system" namespace to be "Ready" ...
	I0116 02:38:30.439272  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:30.517927  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:30.624100  452136 pod_ready.go:92] pod "kube-scheduler-addons-411655" in "kube-system" namespace has status "Ready":"True"
	I0116 02:38:30.624135  452136 pod_ready.go:81] duration metric: took 399.496371ms waiting for pod "kube-scheduler-addons-411655" in "kube-system" namespace to be "Ready" ...
	I0116 02:38:30.624149  452136 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-m8lg8" in "kube-system" namespace to be "Ready" ...
	I0116 02:38:30.624906  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:30.632518  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:30.939053  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:31.017310  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:31.124537  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:31.132103  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:31.438746  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:31.516518  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:31.624613  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:31.630943  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:31.939201  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:32.017127  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:32.128865  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:32.131869  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:32.439287  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:32.519204  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:32.625154  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:32.629500  452136 pod_ready.go:102] pod "metrics-server-7c66d45ddc-m8lg8" in "kube-system" namespace has status "Ready":"False"
	I0116 02:38:32.632614  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:32.938044  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:33.016696  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:33.124975  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:33.131364  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:33.439501  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:33.517238  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:33.624816  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:33.631469  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:33.939275  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:34.016773  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:34.124993  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:34.131200  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:34.439729  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:34.516317  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:34.624644  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:34.630723  452136 pod_ready.go:102] pod "metrics-server-7c66d45ddc-m8lg8" in "kube-system" namespace has status "Ready":"False"
	I0116 02:38:34.632045  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:34.939436  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:35.017073  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:35.124754  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:35.131615  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:35.438509  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:35.517174  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:35.624023  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:35.631101  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:35.938674  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:36.016705  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:36.124152  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:36.131744  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:36.438740  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:36.516135  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:36.623961  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:36.632030  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:36.939285  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:37.016683  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:37.124438  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:37.129355  452136 pod_ready.go:102] pod "metrics-server-7c66d45ddc-m8lg8" in "kube-system" namespace has status "Ready":"False"
	I0116 02:38:37.134355  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:37.439480  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:37.516712  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:37.624557  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:37.631452  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:37.939290  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:38.017558  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:38.124128  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:38.132518  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:38.439177  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:38.517702  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:38.624481  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:38.631292  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:38.939445  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:39.017511  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:39.124496  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:39.129913  452136 pod_ready.go:102] pod "metrics-server-7c66d45ddc-m8lg8" in "kube-system" namespace has status "Ready":"False"
	I0116 02:38:39.131791  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:39.439454  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:39.517398  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:39.627054  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:39.631746  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:39.988831  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:40.089260  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:40.126031  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:40.132984  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:40.439378  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:40.517578  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:40.624571  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:40.632269  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:40.939822  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:41.015953  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:41.124869  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:41.130730  452136 pod_ready.go:102] pod "metrics-server-7c66d45ddc-m8lg8" in "kube-system" namespace has status "Ready":"False"
	I0116 02:38:41.131963  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:41.439048  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:41.516708  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:41.625002  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:41.631584  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:41.938589  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:42.016415  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:42.129086  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:42.131915  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:42.438787  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:42.516847  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:42.625900  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:42.631871  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:43.002708  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:43.022721  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:43.204838  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:43.205428  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:43.212427  452136 pod_ready.go:102] pod "metrics-server-7c66d45ddc-m8lg8" in "kube-system" namespace has status "Ready":"False"
	I0116 02:38:43.439593  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:43.521272  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:43.626087  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:43.632371  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:44.003772  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:44.016988  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:44.125116  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:44.131514  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:44.439339  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:44.517768  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:44.625127  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:44.633378  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:44.940420  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:45.018733  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:45.162875  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:45.163075  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:45.439370  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:45.518065  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:45.624896  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:45.630811  452136 pod_ready.go:102] pod "metrics-server-7c66d45ddc-m8lg8" in "kube-system" namespace has status "Ready":"False"
	I0116 02:38:45.632381  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:45.939518  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:46.017946  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:46.125486  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:46.132809  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:46.439247  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:46.517925  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:46.624732  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:46.631936  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:46.939361  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:47.017432  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:47.125135  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:47.132531  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:47.438577  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:47.516345  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:47.624833  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:47.631578  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:47.938552  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:48.017149  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:48.125370  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:48.128815  452136 pod_ready.go:102] pod "metrics-server-7c66d45ddc-m8lg8" in "kube-system" namespace has status "Ready":"False"
	I0116 02:38:48.131587  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:48.438617  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:48.516342  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:48.624447  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:48.639454  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:48.939224  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:49.017118  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:49.125630  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:49.132734  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:49.438907  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:49.516904  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:49.625417  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:49.631865  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:49.939320  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:50.017208  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:50.124904  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:50.130302  452136 pod_ready.go:102] pod "metrics-server-7c66d45ddc-m8lg8" in "kube-system" namespace has status "Ready":"False"
	I0116 02:38:50.131483  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:50.439324  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:50.517472  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:50.624580  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:50.631519  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:50.939036  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:51.017645  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:51.126006  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:51.132238  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:51.439317  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:51.518251  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:51.625103  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:51.631874  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:51.939341  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:52.016951  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:52.124662  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:52.131312  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:52.439373  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:52.516723  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:52.625075  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:52.628900  452136 pod_ready.go:102] pod "metrics-server-7c66d45ddc-m8lg8" in "kube-system" namespace has status "Ready":"False"
	I0116 02:38:52.631663  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:52.938709  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:53.016664  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:53.124148  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:53.132276  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:53.439398  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:53.516933  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:53.625041  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:53.632886  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:53.938401  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:54.018813  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:54.124664  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:54.132027  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:54.439101  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:54.516935  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:54.624582  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:54.629220  452136 pod_ready.go:102] pod "metrics-server-7c66d45ddc-m8lg8" in "kube-system" namespace has status "Ready":"False"
	I0116 02:38:54.631198  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:54.940040  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:55.017254  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:55.126146  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:55.132478  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:55.502282  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:55.517650  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:55.626036  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:55.633108  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:55.939916  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:56.017531  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:56.124286  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:56.133738  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:56.439621  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:56.516986  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:56.624723  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:56.629983  452136 pod_ready.go:102] pod "metrics-server-7c66d45ddc-m8lg8" in "kube-system" namespace has status "Ready":"False"
	I0116 02:38:56.631926  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:56.939541  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:57.017667  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:57.125856  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:57.131741  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:57.438592  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:57.516992  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:57.624633  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:57.632139  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:57.938808  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:58.016546  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:58.124547  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:58.131845  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:58.438906  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:58.516599  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:58.624240  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:58.631660  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:58.938623  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:59.017797  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:59.128744  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:59.132227  452136 pod_ready.go:102] pod "metrics-server-7c66d45ddc-m8lg8" in "kube-system" namespace has status "Ready":"False"
	I0116 02:38:59.132712  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:59.438970  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:38:59.517095  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:38:59.624746  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:38:59.631756  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:38:59.938593  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:39:00.016059  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:00.125385  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:39:00.132025  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:39:00.439635  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:39:00.517900  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:00.625208  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:39:00.631967  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:39:00.939747  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:39:01.016538  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:01.125462  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:39:01.132870  452136 pod_ready.go:102] pod "metrics-server-7c66d45ddc-m8lg8" in "kube-system" namespace has status "Ready":"False"
	I0116 02:39:01.134641  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:39:01.438841  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:39:01.517314  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:01.624922  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:39:01.631591  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:39:01.939480  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:39:02.017195  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:02.125047  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:39:02.132657  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:39:02.438435  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:39:02.517015  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:02.624627  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:39:02.631719  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:39:02.938488  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:39:03.016864  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:03.125297  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:39:03.131082  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:39:03.439212  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:39:03.516715  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:03.624621  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:39:03.629329  452136 pod_ready.go:102] pod "metrics-server-7c66d45ddc-m8lg8" in "kube-system" namespace has status "Ready":"False"
	I0116 02:39:03.631175  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:39:03.939004  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:39:04.016787  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:04.124640  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:39:04.131853  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:39:04.438835  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:39:04.516617  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:04.624870  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:39:04.629789  452136 pod_ready.go:92] pod "metrics-server-7c66d45ddc-m8lg8" in "kube-system" namespace has status "Ready":"True"
	I0116 02:39:04.629824  452136 pod_ready.go:81] duration metric: took 34.005664849s waiting for pod "metrics-server-7c66d45ddc-m8lg8" in "kube-system" namespace to be "Ready" ...
	I0116 02:39:04.629839  452136 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-tr95k" in "kube-system" namespace to be "Ready" ...
	I0116 02:39:04.631710  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:39:04.939453  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:39:05.016836  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:05.124991  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:39:05.132730  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:39:05.439185  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:39:05.517048  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:05.624141  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:39:05.632503  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:39:05.939533  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:39:06.017394  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:06.124638  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:39:06.133200  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:39:06.438357  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:39:06.516716  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:06.624727  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:39:06.631625  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:39:06.634482  452136 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-tr95k" in "kube-system" namespace has status "Ready":"False"
	I0116 02:39:06.938983  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:39:07.016627  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:07.124648  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:39:07.131869  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:39:07.439168  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:39:07.516603  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:07.624512  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:39:07.631684  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:39:08.004064  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:39:08.017567  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:08.125688  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:39:08.132531  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:39:08.503749  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:39:08.517702  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:08.625821  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:39:08.703706  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:39:08.706634  452136 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-tr95k" in "kube-system" namespace has status "Ready":"False"
	I0116 02:39:09.005992  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:39:09.020027  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:09.127448  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:39:09.202988  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:39:09.439321  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:39:09.517488  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:09.626143  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:39:09.632974  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:39:09.940064  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:39:10.017743  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:10.127072  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:39:10.132661  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:39:10.439972  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:39:10.518033  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:10.626625  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:39:10.632612  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:39:10.938970  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:39:11.019247  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:11.125805  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:39:11.132314  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:39:11.135065  452136 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-tr95k" in "kube-system" namespace has status "Ready":"False"
	I0116 02:39:11.439354  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:39:11.517189  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:11.624769  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:39:11.631693  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:39:11.939594  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:39:12.016806  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:12.125168  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:39:12.132639  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:39:12.439284  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:39:12.517156  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:12.625153  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:39:12.633909  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:39:12.938539  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:39:13.016003  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:13.124947  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:39:13.132324  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:39:13.438789  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:39:13.516355  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:13.624401  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:39:13.633522  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:39:13.635946  452136 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-tr95k" in "kube-system" namespace has status "Ready":"True"
	I0116 02:39:13.635973  452136 pod_ready.go:81] duration metric: took 9.006124784s waiting for pod "nvidia-device-plugin-daemonset-tr95k" in "kube-system" namespace to be "Ready" ...
	I0116 02:39:13.635992  452136 pod_ready.go:38] duration metric: took 45.012437376s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 02:39:13.636009  452136 api_server.go:52] waiting for apiserver process to appear ...
	I0116 02:39:13.636042  452136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 02:39:13.636101  452136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 02:39:13.670294  452136 cri.go:89] found id: "0d7b8db496aea5ae9689c42b25bd0e262884ed04625506108e39a3264bbe2036"
	I0116 02:39:13.670319  452136 cri.go:89] found id: ""
	I0116 02:39:13.670327  452136 logs.go:284] 1 containers: [0d7b8db496aea5ae9689c42b25bd0e262884ed04625506108e39a3264bbe2036]
	I0116 02:39:13.670380  452136 ssh_runner.go:195] Run: which crictl
	I0116 02:39:13.673850  452136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 02:39:13.673907  452136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 02:39:13.706789  452136 cri.go:89] found id: "7fc50e357d121aba722ee080161f52f6e3b448b51f0852ecfafc508f04940d46"
	I0116 02:39:13.706821  452136 cri.go:89] found id: ""
	I0116 02:39:13.706832  452136 logs.go:284] 1 containers: [7fc50e357d121aba722ee080161f52f6e3b448b51f0852ecfafc508f04940d46]
	I0116 02:39:13.706882  452136 ssh_runner.go:195] Run: which crictl
	I0116 02:39:13.710308  452136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 02:39:13.710375  452136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 02:39:13.745081  452136 cri.go:89] found id: "75b99f4b12fe899e08e647004cfe96980e4e20ac612633683d16c2e6d4d8f903"
	I0116 02:39:13.745108  452136 cri.go:89] found id: ""
	I0116 02:39:13.745116  452136 logs.go:284] 1 containers: [75b99f4b12fe899e08e647004cfe96980e4e20ac612633683d16c2e6d4d8f903]
	I0116 02:39:13.745160  452136 ssh_runner.go:195] Run: which crictl
	I0116 02:39:13.748521  452136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 02:39:13.748591  452136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 02:39:13.781194  452136 cri.go:89] found id: "da9f34db7f0f03c056746d0253bd5551bec8d894159cf12bd8a5a9ea49f153d0"
	I0116 02:39:13.781223  452136 cri.go:89] found id: ""
	I0116 02:39:13.781231  452136 logs.go:284] 1 containers: [da9f34db7f0f03c056746d0253bd5551bec8d894159cf12bd8a5a9ea49f153d0]
	I0116 02:39:13.781273  452136 ssh_runner.go:195] Run: which crictl
	I0116 02:39:13.784500  452136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 02:39:13.784569  452136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 02:39:13.816780  452136 cri.go:89] found id: "7f4e879a2be6d68cddde103535394ef697b4a335799e487005c4fde0c142fa23"
	I0116 02:39:13.816804  452136 cri.go:89] found id: ""
	I0116 02:39:13.816814  452136 logs.go:284] 1 containers: [7f4e879a2be6d68cddde103535394ef697b4a335799e487005c4fde0c142fa23]
	I0116 02:39:13.816870  452136 ssh_runner.go:195] Run: which crictl
	I0116 02:39:13.820037  452136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 02:39:13.820094  452136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 02:39:13.852622  452136 cri.go:89] found id: "27b4dfbecd9f12f0c1597b5634d5e5d0b74846974be246e761d818f76ff640f3"
	I0116 02:39:13.852655  452136 cri.go:89] found id: ""
	I0116 02:39:13.852665  452136 logs.go:284] 1 containers: [27b4dfbecd9f12f0c1597b5634d5e5d0b74846974be246e761d818f76ff640f3]
	I0116 02:39:13.852718  452136 ssh_runner.go:195] Run: which crictl
	I0116 02:39:13.856026  452136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 02:39:13.856078  452136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 02:39:13.888758  452136 cri.go:89] found id: "fad020ba8010bc6f08acb7228b2381ea7b0ec492f62271212d32ad25880d262f"
	I0116 02:39:13.888789  452136 cri.go:89] found id: ""
	I0116 02:39:13.888799  452136 logs.go:284] 1 containers: [fad020ba8010bc6f08acb7228b2381ea7b0ec492f62271212d32ad25880d262f]
	I0116 02:39:13.888843  452136 ssh_runner.go:195] Run: which crictl
	I0116 02:39:13.892011  452136 logs.go:123] Gathering logs for coredns [75b99f4b12fe899e08e647004cfe96980e4e20ac612633683d16c2e6d4d8f903] ...
	I0116 02:39:13.892040  452136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75b99f4b12fe899e08e647004cfe96980e4e20ac612633683d16c2e6d4d8f903"
	I0116 02:39:13.923787  452136 logs.go:123] Gathering logs for kube-scheduler [da9f34db7f0f03c056746d0253bd5551bec8d894159cf12bd8a5a9ea49f153d0] ...
	I0116 02:39:13.923817  452136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da9f34db7f0f03c056746d0253bd5551bec8d894159cf12bd8a5a9ea49f153d0"
	I0116 02:39:13.939164  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:39:13.964543  452136 logs.go:123] Gathering logs for kube-proxy [7f4e879a2be6d68cddde103535394ef697b4a335799e487005c4fde0c142fa23] ...
	I0116 02:39:13.964573  452136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f4e879a2be6d68cddde103535394ef697b4a335799e487005c4fde0c142fa23"
	I0116 02:39:13.998782  452136 logs.go:123] Gathering logs for kube-controller-manager [27b4dfbecd9f12f0c1597b5634d5e5d0b74846974be246e761d818f76ff640f3] ...
	I0116 02:39:13.998809  452136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27b4dfbecd9f12f0c1597b5634d5e5d0b74846974be246e761d818f76ff640f3"
	I0116 02:39:14.016940  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:14.056360  452136 logs.go:123] Gathering logs for kindnet [fad020ba8010bc6f08acb7228b2381ea7b0ec492f62271212d32ad25880d262f] ...
	I0116 02:39:14.056408  452136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fad020ba8010bc6f08acb7228b2381ea7b0ec492f62271212d32ad25880d262f"
	I0116 02:39:14.089346  452136 logs.go:123] Gathering logs for container status ...
	I0116 02:39:14.089378  452136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 02:39:14.124680  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:39:14.131647  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:39:14.133124  452136 logs.go:123] Gathering logs for describe nodes ...
	I0116 02:39:14.133150  452136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 02:39:14.233065  452136 logs.go:123] Gathering logs for kube-apiserver [0d7b8db496aea5ae9689c42b25bd0e262884ed04625506108e39a3264bbe2036] ...
	I0116 02:39:14.233095  452136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0d7b8db496aea5ae9689c42b25bd0e262884ed04625506108e39a3264bbe2036"
	I0116 02:39:14.278696  452136 logs.go:123] Gathering logs for etcd [7fc50e357d121aba722ee080161f52f6e3b448b51f0852ecfafc508f04940d46] ...
	I0116 02:39:14.278729  452136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7fc50e357d121aba722ee080161f52f6e3b448b51f0852ecfafc508f04940d46"
	I0116 02:39:14.322602  452136 logs.go:123] Gathering logs for CRI-O ...
	I0116 02:39:14.322633  452136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 02:39:14.399596  452136 logs.go:123] Gathering logs for kubelet ...
	I0116 02:39:14.399632  452136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0116 02:39:14.439107  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:39:14.473293  452136 logs.go:123] Gathering logs for dmesg ...
	I0116 02:39:14.473333  452136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 02:39:14.517012  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:14.624944  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:39:14.632160  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:39:14.939477  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:39:15.017052  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:15.126504  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:39:15.131874  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:39:15.439039  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:39:15.517691  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:15.625158  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:39:15.632847  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:39:15.939896  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:39:16.016971  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:16.125179  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:39:16.135500  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:39:16.438867  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:39:16.517619  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:16.624800  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:39:16.631807  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:39:16.939775  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:39:17.000505  452136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 02:39:17.014058  452136 api_server.go:72] duration metric: took 1m19.699323192s to wait for apiserver process to appear ...
	I0116 02:39:17.014088  452136 api_server.go:88] waiting for apiserver healthz status ...
	I0116 02:39:17.014170  452136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 02:39:17.014233  452136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 02:39:17.017856  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:17.049502  452136 cri.go:89] found id: "0d7b8db496aea5ae9689c42b25bd0e262884ed04625506108e39a3264bbe2036"
	I0116 02:39:17.049530  452136 cri.go:89] found id: ""
	I0116 02:39:17.049540  452136 logs.go:284] 1 containers: [0d7b8db496aea5ae9689c42b25bd0e262884ed04625506108e39a3264bbe2036]
	I0116 02:39:17.049595  452136 ssh_runner.go:195] Run: which crictl
	I0116 02:39:17.053277  452136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 02:39:17.053345  452136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 02:39:17.092132  452136 cri.go:89] found id: "7fc50e357d121aba722ee080161f52f6e3b448b51f0852ecfafc508f04940d46"
	I0116 02:39:17.092178  452136 cri.go:89] found id: ""
	I0116 02:39:17.092190  452136 logs.go:284] 1 containers: [7fc50e357d121aba722ee080161f52f6e3b448b51f0852ecfafc508f04940d46]
	I0116 02:39:17.092251  452136 ssh_runner.go:195] Run: which crictl
	I0116 02:39:17.096365  452136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 02:39:17.096476  452136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 02:39:17.125373  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:39:17.133253  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:39:17.143854  452136 cri.go:89] found id: "75b99f4b12fe899e08e647004cfe96980e4e20ac612633683d16c2e6d4d8f903"
	I0116 02:39:17.143875  452136 cri.go:89] found id: ""
	I0116 02:39:17.143885  452136 logs.go:284] 1 containers: [75b99f4b12fe899e08e647004cfe96980e4e20ac612633683d16c2e6d4d8f903]
	I0116 02:39:17.143975  452136 ssh_runner.go:195] Run: which crictl
	I0116 02:39:17.150975  452136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 02:39:17.151049  452136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 02:39:17.233534  452136 cri.go:89] found id: "da9f34db7f0f03c056746d0253bd5551bec8d894159cf12bd8a5a9ea49f153d0"
	I0116 02:39:17.233554  452136 cri.go:89] found id: ""
	I0116 02:39:17.233562  452136 logs.go:284] 1 containers: [da9f34db7f0f03c056746d0253bd5551bec8d894159cf12bd8a5a9ea49f153d0]
	I0116 02:39:17.233622  452136 ssh_runner.go:195] Run: which crictl
	I0116 02:39:17.237148  452136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 02:39:17.237220  452136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 02:39:17.308075  452136 cri.go:89] found id: "7f4e879a2be6d68cddde103535394ef697b4a335799e487005c4fde0c142fa23"
	I0116 02:39:17.308102  452136 cri.go:89] found id: ""
	I0116 02:39:17.308114  452136 logs.go:284] 1 containers: [7f4e879a2be6d68cddde103535394ef697b4a335799e487005c4fde0c142fa23]
	I0116 02:39:17.308163  452136 ssh_runner.go:195] Run: which crictl
	I0116 02:39:17.311666  452136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 02:39:17.311739  452136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 02:39:17.349968  452136 cri.go:89] found id: "27b4dfbecd9f12f0c1597b5634d5e5d0b74846974be246e761d818f76ff640f3"
	I0116 02:39:17.349997  452136 cri.go:89] found id: ""
	I0116 02:39:17.350007  452136 logs.go:284] 1 containers: [27b4dfbecd9f12f0c1597b5634d5e5d0b74846974be246e761d818f76ff640f3]
	I0116 02:39:17.350068  452136 ssh_runner.go:195] Run: which crictl
	I0116 02:39:17.353566  452136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 02:39:17.353639  452136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 02:39:17.427346  452136 cri.go:89] found id: "fad020ba8010bc6f08acb7228b2381ea7b0ec492f62271212d32ad25880d262f"
	I0116 02:39:17.427381  452136 cri.go:89] found id: ""
	I0116 02:39:17.427392  452136 logs.go:284] 1 containers: [fad020ba8010bc6f08acb7228b2381ea7b0ec492f62271212d32ad25880d262f]
	I0116 02:39:17.427525  452136 ssh_runner.go:195] Run: which crictl
	I0116 02:39:17.430823  452136 logs.go:123] Gathering logs for etcd [7fc50e357d121aba722ee080161f52f6e3b448b51f0852ecfafc508f04940d46] ...
	I0116 02:39:17.430848  452136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7fc50e357d121aba722ee080161f52f6e3b448b51f0852ecfafc508f04940d46"
	I0116 02:39:17.439064  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:39:17.517169  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:17.520076  452136 logs.go:123] Gathering logs for kube-controller-manager [27b4dfbecd9f12f0c1597b5634d5e5d0b74846974be246e761d818f76ff640f3] ...
	I0116 02:39:17.520105  452136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27b4dfbecd9f12f0c1597b5634d5e5d0b74846974be246e761d818f76ff640f3"
	I0116 02:39:17.581920  452136 logs.go:123] Gathering logs for CRI-O ...
	I0116 02:39:17.581954  452136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 02:39:17.624695  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:39:17.631932  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:39:17.683226  452136 logs.go:123] Gathering logs for container status ...
	I0116 02:39:17.683264  452136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 02:39:17.809125  452136 logs.go:123] Gathering logs for kubelet ...
	I0116 02:39:17.809156  452136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0116 02:39:17.893909  452136 logs.go:123] Gathering logs for describe nodes ...
	I0116 02:39:17.893947  452136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 02:39:17.939256  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:39:18.017574  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:18.041054  452136 logs.go:123] Gathering logs for coredns [75b99f4b12fe899e08e647004cfe96980e4e20ac612633683d16c2e6d4d8f903] ...
	I0116 02:39:18.041090  452136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75b99f4b12fe899e08e647004cfe96980e4e20ac612633683d16c2e6d4d8f903"
	I0116 02:39:18.074713  452136 logs.go:123] Gathering logs for kube-scheduler [da9f34db7f0f03c056746d0253bd5551bec8d894159cf12bd8a5a9ea49f153d0] ...
	I0116 02:39:18.074745  452136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da9f34db7f0f03c056746d0253bd5551bec8d894159cf12bd8a5a9ea49f153d0"
	I0116 02:39:18.125177  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:39:18.133063  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:39:18.146138  452136 logs.go:123] Gathering logs for kube-proxy [7f4e879a2be6d68cddde103535394ef697b4a335799e487005c4fde0c142fa23] ...
	I0116 02:39:18.146181  452136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f4e879a2be6d68cddde103535394ef697b4a335799e487005c4fde0c142fa23"
	I0116 02:39:18.207053  452136 logs.go:123] Gathering logs for kindnet [fad020ba8010bc6f08acb7228b2381ea7b0ec492f62271212d32ad25880d262f] ...
	I0116 02:39:18.207092  452136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fad020ba8010bc6f08acb7228b2381ea7b0ec492f62271212d32ad25880d262f"
	I0116 02:39:18.241291  452136 logs.go:123] Gathering logs for dmesg ...
	I0116 02:39:18.241331  452136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 02:39:18.267694  452136 logs.go:123] Gathering logs for kube-apiserver [0d7b8db496aea5ae9689c42b25bd0e262884ed04625506108e39a3264bbe2036] ...
	I0116 02:39:18.267740  452136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0d7b8db496aea5ae9689c42b25bd0e262884ed04625506108e39a3264bbe2036"
	I0116 02:39:18.439189  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:39:18.516813  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:18.624559  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:39:18.631512  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:39:18.938555  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:39:19.018408  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:19.125194  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:39:19.133340  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:39:19.439668  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:39:19.522268  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:19.624938  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:39:19.632336  452136 kapi.go:107] duration metric: took 1m16.004800136s to wait for kubernetes.io/minikube-addons=registry ...
	I0116 02:39:19.938835  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:39:20.016465  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:20.124601  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:39:20.439374  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:39:20.516721  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:20.625285  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:39:20.827697  452136 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0116 02:39:20.832938  452136 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0116 02:39:20.834011  452136 api_server.go:141] control plane version: v1.28.4
	I0116 02:39:20.834040  452136 api_server.go:131] duration metric: took 3.819943728s to wait for apiserver health ...
	I0116 02:39:20.834051  452136 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 02:39:20.834078  452136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 02:39:20.834137  452136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 02:39:20.869023  452136 cri.go:89] found id: "0d7b8db496aea5ae9689c42b25bd0e262884ed04625506108e39a3264bbe2036"
	I0116 02:39:20.869045  452136 cri.go:89] found id: ""
	I0116 02:39:20.869053  452136 logs.go:284] 1 containers: [0d7b8db496aea5ae9689c42b25bd0e262884ed04625506108e39a3264bbe2036]
	I0116 02:39:20.869096  452136 ssh_runner.go:195] Run: which crictl
	I0116 02:39:20.872310  452136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 02:39:20.872377  452136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 02:39:20.904639  452136 cri.go:89] found id: "7fc50e357d121aba722ee080161f52f6e3b448b51f0852ecfafc508f04940d46"
	I0116 02:39:20.904667  452136 cri.go:89] found id: ""
	I0116 02:39:20.904677  452136 logs.go:284] 1 containers: [7fc50e357d121aba722ee080161f52f6e3b448b51f0852ecfafc508f04940d46]
	I0116 02:39:20.904738  452136 ssh_runner.go:195] Run: which crictl
	I0116 02:39:20.908019  452136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 02:39:20.908081  452136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 02:39:20.939467  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:39:20.941760  452136 cri.go:89] found id: "75b99f4b12fe899e08e647004cfe96980e4e20ac612633683d16c2e6d4d8f903"
	I0116 02:39:20.941789  452136 cri.go:89] found id: ""
	I0116 02:39:20.941799  452136 logs.go:284] 1 containers: [75b99f4b12fe899e08e647004cfe96980e4e20ac612633683d16c2e6d4d8f903]
	I0116 02:39:20.941853  452136 ssh_runner.go:195] Run: which crictl
	I0116 02:39:20.945300  452136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 02:39:20.945360  452136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 02:39:20.977901  452136 cri.go:89] found id: "da9f34db7f0f03c056746d0253bd5551bec8d894159cf12bd8a5a9ea49f153d0"
	I0116 02:39:20.977929  452136 cri.go:89] found id: ""
	I0116 02:39:20.977947  452136 logs.go:284] 1 containers: [da9f34db7f0f03c056746d0253bd5551bec8d894159cf12bd8a5a9ea49f153d0]
	I0116 02:39:20.977995  452136 ssh_runner.go:195] Run: which crictl
	I0116 02:39:20.981311  452136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 02:39:20.981374  452136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 02:39:21.014008  452136 cri.go:89] found id: "7f4e879a2be6d68cddde103535394ef697b4a335799e487005c4fde0c142fa23"
	I0116 02:39:21.014087  452136 cri.go:89] found id: ""
	I0116 02:39:21.014102  452136 logs.go:284] 1 containers: [7f4e879a2be6d68cddde103535394ef697b4a335799e487005c4fde0c142fa23]
	I0116 02:39:21.014159  452136 ssh_runner.go:195] Run: which crictl
	I0116 02:39:21.016698  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:21.017759  452136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 02:39:21.017816  452136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 02:39:21.051201  452136 cri.go:89] found id: "27b4dfbecd9f12f0c1597b5634d5e5d0b74846974be246e761d818f76ff640f3"
	I0116 02:39:21.051223  452136 cri.go:89] found id: ""
	I0116 02:39:21.051230  452136 logs.go:284] 1 containers: [27b4dfbecd9f12f0c1597b5634d5e5d0b74846974be246e761d818f76ff640f3]
	I0116 02:39:21.051272  452136 ssh_runner.go:195] Run: which crictl
	I0116 02:39:21.054602  452136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 02:39:21.054666  452136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 02:39:21.086724  452136 cri.go:89] found id: "fad020ba8010bc6f08acb7228b2381ea7b0ec492f62271212d32ad25880d262f"
	I0116 02:39:21.086753  452136 cri.go:89] found id: ""
	I0116 02:39:21.086765  452136 logs.go:284] 1 containers: [fad020ba8010bc6f08acb7228b2381ea7b0ec492f62271212d32ad25880d262f]
	I0116 02:39:21.086815  452136 ssh_runner.go:195] Run: which crictl
	I0116 02:39:21.090182  452136 logs.go:123] Gathering logs for kindnet [fad020ba8010bc6f08acb7228b2381ea7b0ec492f62271212d32ad25880d262f] ...
	I0116 02:39:21.090204  452136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fad020ba8010bc6f08acb7228b2381ea7b0ec492f62271212d32ad25880d262f"
	I0116 02:39:21.122171  452136 logs.go:123] Gathering logs for container status ...
	I0116 02:39:21.122201  452136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 02:39:21.125323  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:39:21.164922  452136 logs.go:123] Gathering logs for kube-apiserver [0d7b8db496aea5ae9689c42b25bd0e262884ed04625506108e39a3264bbe2036] ...
	I0116 02:39:21.164951  452136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0d7b8db496aea5ae9689c42b25bd0e262884ed04625506108e39a3264bbe2036"
	I0116 02:39:21.208702  452136 logs.go:123] Gathering logs for kube-scheduler [da9f34db7f0f03c056746d0253bd5551bec8d894159cf12bd8a5a9ea49f153d0] ...
	I0116 02:39:21.208739  452136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da9f34db7f0f03c056746d0253bd5551bec8d894159cf12bd8a5a9ea49f153d0"
	I0116 02:39:21.247736  452136 logs.go:123] Gathering logs for kube-controller-manager [27b4dfbecd9f12f0c1597b5634d5e5d0b74846974be246e761d818f76ff640f3] ...
	I0116 02:39:21.247772  452136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27b4dfbecd9f12f0c1597b5634d5e5d0b74846974be246e761d818f76ff640f3"
	I0116 02:39:21.305510  452136 logs.go:123] Gathering logs for etcd [7fc50e357d121aba722ee080161f52f6e3b448b51f0852ecfafc508f04940d46] ...
	I0116 02:39:21.305544  452136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7fc50e357d121aba722ee080161f52f6e3b448b51f0852ecfafc508f04940d46"
	I0116 02:39:21.348961  452136 logs.go:123] Gathering logs for coredns [75b99f4b12fe899e08e647004cfe96980e4e20ac612633683d16c2e6d4d8f903] ...
	I0116 02:39:21.348994  452136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75b99f4b12fe899e08e647004cfe96980e4e20ac612633683d16c2e6d4d8f903"
	I0116 02:39:21.382518  452136 logs.go:123] Gathering logs for kube-proxy [7f4e879a2be6d68cddde103535394ef697b4a335799e487005c4fde0c142fa23] ...
	I0116 02:39:21.382547  452136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f4e879a2be6d68cddde103535394ef697b4a335799e487005c4fde0c142fa23"
	I0116 02:39:21.417645  452136 logs.go:123] Gathering logs for CRI-O ...
	I0116 02:39:21.417673  452136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 02:39:21.438549  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:39:21.491872  452136 logs.go:123] Gathering logs for kubelet ...
	I0116 02:39:21.491910  452136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0116 02:39:21.517028  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:21.571337  452136 logs.go:123] Gathering logs for dmesg ...
	I0116 02:39:21.571375  452136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 02:39:21.595692  452136 logs.go:123] Gathering logs for describe nodes ...
	I0116 02:39:21.595726  452136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 02:39:21.624545  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:39:21.939278  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:39:22.016691  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:22.125049  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:39:22.439531  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:39:22.518075  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:22.625492  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:39:22.939493  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:39:23.017692  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:23.125283  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:39:23.508757  452136 kapi.go:107] duration metric: took 1m18.073524165s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0116 02:39:23.510674  452136 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-411655 cluster.
	I0116 02:39:23.512123  452136 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0116 02:39:23.513464  452136 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
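The gcp-auth advisory above describes an opt-out label for the credential-mounting webhook. As a minimal sketch of using it: the log specifies only the label key, so the pod name, image, and `true` value below are illustrative assumptions (the image is the one pulled later in this run).

    # Hypothetical pod that opts out of GCP credential mounting.
    # Only the gcp-auth-skip-secret key comes from the log; name, image, and value are illustrative.
    kubectl --context addons-411655 run skip-gcp-demo \
      --image=gcr.io/google-samples/hello-app:1.0 \
      --labels=gcp-auth-skip-secret=true
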
	I0116 02:39:23.518771  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:23.626163  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:39:24.022544  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:24.125462  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:39:24.309548  452136 system_pods.go:59] 19 kube-system pods found
	I0116 02:39:24.309647  452136 system_pods.go:61] "coredns-5dd5756b68-g2rlh" [a4fa8c1e-4b9b-4686-9466-7f5b2e73bff2] Running
	I0116 02:39:24.309666  452136 system_pods.go:61] "csi-hostpath-attacher-0" [1d1a9332-1b12-40a5-b2c1-ada8728e5fd8] Running
	I0116 02:39:24.309681  452136 system_pods.go:61] "csi-hostpath-resizer-0" [48a5f050-edcd-4b0e-b8e2-74eb87150914] Running
	I0116 02:39:24.309718  452136 system_pods.go:61] "csi-hostpathplugin-lrr68" [e6866fb7-bae1-4f32-a89f-e60c06d11935] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0116 02:39:24.309745  452136 system_pods.go:61] "etcd-addons-411655" [672836e6-a699-40e5-9860-20597a66dc3e] Running
	I0116 02:39:24.309763  452136 system_pods.go:61] "kindnet-vdzl4" [eca1f23a-afcd-477d-be9d-40862de5879a] Running
	I0116 02:39:24.309778  452136 system_pods.go:61] "kube-apiserver-addons-411655" [5a51ace6-2b20-470a-b3e6-c5931f603ef6] Running
	I0116 02:39:24.309793  452136 system_pods.go:61] "kube-controller-manager-addons-411655" [76191807-080b-4e6b-b539-67566843be15] Running
	I0116 02:39:24.309821  452136 system_pods.go:61] "kube-ingress-dns-minikube" [b464dc23-2aa6-4791-abfd-3fa458eb2f99] Running
	I0116 02:39:24.309842  452136 system_pods.go:61] "kube-proxy-hnr6q" [098413f3-79e2-4236-9b8d-ec6e78866e83] Running
	I0116 02:39:24.309858  452136 system_pods.go:61] "kube-scheduler-addons-411655" [6c26e5bb-15c6-48d7-9052-c76089623107] Running
	I0116 02:39:24.309873  452136 system_pods.go:61] "metrics-server-7c66d45ddc-m8lg8" [1bf5020c-76ff-4dcd-bff4-03851042ddaa] Running
	I0116 02:39:24.309888  452136 system_pods.go:61] "nvidia-device-plugin-daemonset-tr95k" [14e5b74a-4026-4787-94c2-6fa1eeb1e161] Running
	I0116 02:39:24.309902  452136 system_pods.go:61] "registry-96z6n" [a9935a40-1774-4d34-846a-3f21c4e26b94] Running
	I0116 02:39:24.309926  452136 system_pods.go:61] "registry-proxy-kkkjz" [8796e73d-33b4-46b2-b5e3-48bec46545b4] Running
	I0116 02:39:24.309949  452136 system_pods.go:61] "snapshot-controller-58dbcc7b99-7b46v" [c2043436-327e-4cdf-aa93-1f1e3429d0df] Running
	I0116 02:39:24.309965  452136 system_pods.go:61] "snapshot-controller-58dbcc7b99-jjn5l" [d3beee3a-589f-4da9-bfe0-86248103183a] Running
	I0116 02:39:24.309980  452136 system_pods.go:61] "storage-provisioner" [be7b92e6-21a5-4abc-8614-02bf7811cb37] Running
	I0116 02:39:24.309995  452136 system_pods.go:61] "tiller-deploy-7b677967b9-bjttm" [65952226-2c10-4145-9f5a-3a8193ea8a97] Running
	I0116 02:39:24.310021  452136 system_pods.go:74] duration metric: took 3.47595321s to wait for pod list to return data ...
	I0116 02:39:24.310045  452136 default_sa.go:34] waiting for default service account to be created ...
	I0116 02:39:24.312891  452136 default_sa.go:45] found service account: "default"
	I0116 02:39:24.312988  452136 default_sa.go:55] duration metric: took 2.925968ms for default service account to be created ...
	I0116 02:39:24.313009  452136 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 02:39:24.322512  452136 system_pods.go:86] 19 kube-system pods found
	I0116 02:39:24.322539  452136 system_pods.go:89] "coredns-5dd5756b68-g2rlh" [a4fa8c1e-4b9b-4686-9466-7f5b2e73bff2] Running
	I0116 02:39:24.322545  452136 system_pods.go:89] "csi-hostpath-attacher-0" [1d1a9332-1b12-40a5-b2c1-ada8728e5fd8] Running
	I0116 02:39:24.322550  452136 system_pods.go:89] "csi-hostpath-resizer-0" [48a5f050-edcd-4b0e-b8e2-74eb87150914] Running
	I0116 02:39:24.322557  452136 system_pods.go:89] "csi-hostpathplugin-lrr68" [e6866fb7-bae1-4f32-a89f-e60c06d11935] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0116 02:39:24.322562  452136 system_pods.go:89] "etcd-addons-411655" [672836e6-a699-40e5-9860-20597a66dc3e] Running
	I0116 02:39:24.322568  452136 system_pods.go:89] "kindnet-vdzl4" [eca1f23a-afcd-477d-be9d-40862de5879a] Running
	I0116 02:39:24.322573  452136 system_pods.go:89] "kube-apiserver-addons-411655" [5a51ace6-2b20-470a-b3e6-c5931f603ef6] Running
	I0116 02:39:24.322577  452136 system_pods.go:89] "kube-controller-manager-addons-411655" [76191807-080b-4e6b-b539-67566843be15] Running
	I0116 02:39:24.322585  452136 system_pods.go:89] "kube-ingress-dns-minikube" [b464dc23-2aa6-4791-abfd-3fa458eb2f99] Running
	I0116 02:39:24.322590  452136 system_pods.go:89] "kube-proxy-hnr6q" [098413f3-79e2-4236-9b8d-ec6e78866e83] Running
	I0116 02:39:24.322596  452136 system_pods.go:89] "kube-scheduler-addons-411655" [6c26e5bb-15c6-48d7-9052-c76089623107] Running
	I0116 02:39:24.322601  452136 system_pods.go:89] "metrics-server-7c66d45ddc-m8lg8" [1bf5020c-76ff-4dcd-bff4-03851042ddaa] Running
	I0116 02:39:24.322608  452136 system_pods.go:89] "nvidia-device-plugin-daemonset-tr95k" [14e5b74a-4026-4787-94c2-6fa1eeb1e161] Running
	I0116 02:39:24.322612  452136 system_pods.go:89] "registry-96z6n" [a9935a40-1774-4d34-846a-3f21c4e26b94] Running
	I0116 02:39:24.322618  452136 system_pods.go:89] "registry-proxy-kkkjz" [8796e73d-33b4-46b2-b5e3-48bec46545b4] Running
	I0116 02:39:24.322622  452136 system_pods.go:89] "snapshot-controller-58dbcc7b99-7b46v" [c2043436-327e-4cdf-aa93-1f1e3429d0df] Running
	I0116 02:39:24.322628  452136 system_pods.go:89] "snapshot-controller-58dbcc7b99-jjn5l" [d3beee3a-589f-4da9-bfe0-86248103183a] Running
	I0116 02:39:24.322632  452136 system_pods.go:89] "storage-provisioner" [be7b92e6-21a5-4abc-8614-02bf7811cb37] Running
	I0116 02:39:24.322637  452136 system_pods.go:89] "tiller-deploy-7b677967b9-bjttm" [65952226-2c10-4145-9f5a-3a8193ea8a97] Running
	I0116 02:39:24.322643  452136 system_pods.go:126] duration metric: took 9.619382ms to wait for k8s-apps to be running ...
	I0116 02:39:24.322655  452136 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 02:39:24.322695  452136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 02:39:24.413956  452136 system_svc.go:56] duration metric: took 91.285656ms WaitForService to wait for kubelet.
	I0116 02:39:24.414041  452136 kubeadm.go:581] duration metric: took 1m27.099312985s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 02:39:24.414079  452136 node_conditions.go:102] verifying NodePressure condition ...
	I0116 02:39:24.417574  452136 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0116 02:39:24.417606  452136 node_conditions.go:123] node cpu capacity is 8
	I0116 02:39:24.417625  452136 node_conditions.go:105] duration metric: took 3.539265ms to run NodePressure ...
	I0116 02:39:24.417639  452136 start.go:228] waiting for startup goroutines ...
	I0116 02:39:24.522037  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:24.625956  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:39:25.018067  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:25.126221  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:39:25.517214  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:25.625928  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:39:26.017703  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:26.124021  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:39:26.517052  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:26.624783  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:39:27.016580  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:27.125061  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:39:27.516449  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:27.624390  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:39:28.017164  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:28.125314  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:39:28.517634  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:28.624862  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:39:29.018180  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:29.124881  452136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:39:29.517332  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:29.625136  452136 kapi.go:107] duration metric: took 1m26.005264508s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0116 02:39:30.017136  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:30.516648  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:31.017743  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:31.518266  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:32.016874  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:32.517395  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:33.016233  452136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:39:33.516340  452136 kapi.go:107] duration metric: took 1m29.005120203s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0116 02:39:33.550056  452136 out.go:177] * Enabled addons: cloud-spanner, nvidia-device-plugin, storage-provisioner, ingress-dns, helm-tiller, inspektor-gadget, yakd, metrics-server, default-storageclass, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I0116 02:39:33.563437  452136 addons.go:505] enable addons completed in 1m36.874248494s: enabled=[cloud-spanner nvidia-device-plugin storage-provisioner ingress-dns helm-tiller inspektor-gadget yakd metrics-server default-storageclass volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I0116 02:39:33.563500  452136 start.go:233] waiting for cluster config update ...
	I0116 02:39:33.563524  452136 start.go:242] writing updated cluster config ...
	I0116 02:39:33.564209  452136 ssh_runner.go:195] Run: rm -f paused
	I0116 02:39:33.614886  452136 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0116 02:39:33.679963  452136 out.go:177] * Done! kubectl is now configured to use "addons-411655" cluster and "default" namespace by default
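	
	The kapi.go entries above poll two label selectors until the matching pods report Ready. A rough shell equivalent (a sketch; the namespaces are assumptions, since the log prints only the selectors) is kubectl's built-in wait:
	
	    $ kubectl --context addons-411655 -n ingress-nginx wait pod \
	        --selector=app.kubernetes.io/name=ingress-nginx \
	        --for=condition=Ready --timeout=120s
	    $ kubectl --context addons-411655 -n kube-system wait pod \
	        --selector=kubernetes.io/minikube-addons=csi-hostpath-driver \
	        --for=condition=Ready --timeout=120s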
	
	
	==> CRI-O <==
	Jan 16 02:42:10 addons-411655 crio[944]: time="2024-01-16 02:42:10.850542808Z" level=info msg="Removing container: 2d2502fd1c2afdc8934f2e61539803b7a3ae2a660e99bc2ab6f41822fefcba0b" id=b8c1ac4e-c2d3-43bc-91e4-4d90095ecd4b name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 16 02:42:10 addons-411655 crio[944]: time="2024-01-16 02:42:10.929168890Z" level=info msg="Removed container 2d2502fd1c2afdc8934f2e61539803b7a3ae2a660e99bc2ab6f41822fefcba0b: kube-system/kube-ingress-dns-minikube/minikube-ingress-dns" id=b8c1ac4e-c2d3-43bc-91e4-4d90095ecd4b name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 16 02:42:11 addons-411655 crio[944]: time="2024-01-16 02:42:11.701951336Z" level=info msg="Pulled image: gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7" id=b2dccd45-44dc-44f3-aff1-e8bd7a4f6337 name=/runtime.v1.ImageService/PullImage
	Jan 16 02:42:11 addons-411655 crio[944]: time="2024-01-16 02:42:11.702862106Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=fe9245fd-d9c8-483d-83b6-73907281a686 name=/runtime.v1.ImageService/ImageStatus
	Jan 16 02:42:11 addons-411655 crio[944]: time="2024-01-16 02:42:11.703807913Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=fe9245fd-d9c8-483d-83b6-73907281a686 name=/runtime.v1.ImageService/ImageStatus
	Jan 16 02:42:11 addons-411655 crio[944]: time="2024-01-16 02:42:11.704763869Z" level=info msg="Creating container: default/hello-world-app-5d77478584-4cqjh/hello-world-app" id=5e163b28-31ca-4964-b210-cf36e79e5499 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 16 02:42:11 addons-411655 crio[944]: time="2024-01-16 02:42:11.704863260Z" level=warning msg="Allowed annotations are specified for workload []"
	Jan 16 02:42:11 addons-411655 crio[944]: time="2024-01-16 02:42:11.756581861Z" level=info msg="Created container ea9b5f129b6c0acc92ebb933d0c9d677712c429da110e42073bab73d66d47c04: default/hello-world-app-5d77478584-4cqjh/hello-world-app" id=5e163b28-31ca-4964-b210-cf36e79e5499 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 16 02:42:11 addons-411655 crio[944]: time="2024-01-16 02:42:11.757201383Z" level=info msg="Starting container: ea9b5f129b6c0acc92ebb933d0c9d677712c429da110e42073bab73d66d47c04" id=83978231-d968-4cef-9cd8-5ff9da436b9d name=/runtime.v1.RuntimeService/StartContainer
	Jan 16 02:42:11 addons-411655 crio[944]: time="2024-01-16 02:42:11.763373746Z" level=info msg="Started container" PID=10505 containerID=ea9b5f129b6c0acc92ebb933d0c9d677712c429da110e42073bab73d66d47c04 description=default/hello-world-app-5d77478584-4cqjh/hello-world-app id=83978231-d968-4cef-9cd8-5ff9da436b9d name=/runtime.v1.RuntimeService/StartContainer sandboxID=9dcefe093ff1379844ea2c1432e9d52b9272b1f1f3372a2466d924584ec40a4f
	Jan 16 02:42:12 addons-411655 crio[944]: time="2024-01-16 02:42:12.431067514Z" level=info msg="Stopping container: aa6db0c001d275387c0fb8afc3178f41454070cf6fa21982d3dafbcb09d9ae1b (timeout: 2s)" id=1b181149-a82f-415f-b5e1-2b80dff0e43a name=/runtime.v1.RuntimeService/StopContainer
	Jan 16 02:42:14 addons-411655 crio[944]: time="2024-01-16 02:42:14.437564929Z" level=warning msg="Stopping container aa6db0c001d275387c0fb8afc3178f41454070cf6fa21982d3dafbcb09d9ae1b with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=1b181149-a82f-415f-b5e1-2b80dff0e43a name=/runtime.v1.RuntimeService/StopContainer
	Jan 16 02:42:14 addons-411655 conmon[6532]: conmon aa6db0c001d275387c0f <ninfo>: container 6544 exited with status 137
	Jan 16 02:42:14 addons-411655 crio[944]: time="2024-01-16 02:42:14.569556152Z" level=info msg="Stopped container aa6db0c001d275387c0fb8afc3178f41454070cf6fa21982d3dafbcb09d9ae1b: ingress-nginx/ingress-nginx-controller-69cff4fd79-4g9lm/controller" id=1b181149-a82f-415f-b5e1-2b80dff0e43a name=/runtime.v1.RuntimeService/StopContainer
	Jan 16 02:42:14 addons-411655 crio[944]: time="2024-01-16 02:42:14.570102440Z" level=info msg="Stopping pod sandbox: ef7731879535956df00ddd740929d6f961a6966666eb9b04f12c28a88652072f" id=b73cf23d-eb66-4774-a8e3-49d1a68726fe name=/runtime.v1.RuntimeService/StopPodSandbox
	Jan 16 02:42:14 addons-411655 crio[944]: time="2024-01-16 02:42:14.573102523Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-J6YI3L2RQ7ILVS6R - [0:0]\n:KUBE-HP-UQDGHI2UJJ5EHJJV - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n-X KUBE-HP-UQDGHI2UJJ5EHJJV\n-X KUBE-HP-J6YI3L2RQ7ILVS6R\nCOMMIT\n"
	Jan 16 02:42:14 addons-411655 crio[944]: time="2024-01-16 02:42:14.574497119Z" level=info msg="Closing host port tcp:80"
	Jan 16 02:42:14 addons-411655 crio[944]: time="2024-01-16 02:42:14.574541716Z" level=info msg="Closing host port tcp:443"
	Jan 16 02:42:14 addons-411655 crio[944]: time="2024-01-16 02:42:14.575912409Z" level=info msg="Host port tcp:80 does not have an open socket"
	Jan 16 02:42:14 addons-411655 crio[944]: time="2024-01-16 02:42:14.575930034Z" level=info msg="Host port tcp:443 does not have an open socket"
	Jan 16 02:42:14 addons-411655 crio[944]: time="2024-01-16 02:42:14.576064261Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-69cff4fd79-4g9lm Namespace:ingress-nginx ID:ef7731879535956df00ddd740929d6f961a6966666eb9b04f12c28a88652072f UID:749b7a18-9a74-48e0-98b8-dd14c0057f7b NetNS:/var/run/netns/29ec369e-c361-4613-b95f-54646070a722 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jan 16 02:42:14 addons-411655 crio[944]: time="2024-01-16 02:42:14.576175630Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-69cff4fd79-4g9lm from CNI network \"kindnet\" (type=ptp)"
	Jan 16 02:42:14 addons-411655 crio[944]: time="2024-01-16 02:42:14.617729147Z" level=info msg="Stopped pod sandbox: ef7731879535956df00ddd740929d6f961a6966666eb9b04f12c28a88652072f" id=b73cf23d-eb66-4774-a8e3-49d1a68726fe name=/runtime.v1.RuntimeService/StopPodSandbox
	Jan 16 02:42:14 addons-411655 crio[944]: time="2024-01-16 02:42:14.863421446Z" level=info msg="Removing container: aa6db0c001d275387c0fb8afc3178f41454070cf6fa21982d3dafbcb09d9ae1b" id=c9f0df97-b4e7-4f9c-b1b2-fb26f6b59154 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 16 02:42:14 addons-411655 crio[944]: time="2024-01-16 02:42:14.877472738Z" level=info msg="Removed container aa6db0c001d275387c0fb8afc3178f41454070cf6fa21982d3dafbcb09d9ae1b: ingress-nginx/ingress-nginx-controller-69cff4fd79-4g9lm/controller" id=c9f0df97-b4e7-4f9c-b1b2-fb26f6b59154 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ea9b5f129b6c0       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      7 seconds ago       Running             hello-world-app           0                   9dcefe093ff13       hello-world-app-5d77478584-4cqjh
	650cfb4d14192       ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67                        2 minutes ago       Running             headlamp                  0                   06a085c0f2c6d       headlamp-7ddfbb94ff-9t5sd
	249ca55aeb225       docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686                              2 minutes ago       Running             nginx                     0                   c6298e61facc0       nginx
	de7876dcf40fd       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 2 minutes ago       Running             gcp-auth                  0                   f12186c75f42a       gcp-auth-d4c87556c-jhq8f
	d4e27639737f2       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              3 minutes ago       Running             yakd                      0                   9831ba2420921       yakd-dashboard-9947fc6bf-8b6mj
	b7af6eee1021d       1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb                                                             3 minutes ago       Exited              patch                     1                   fcb0402a94dee       ingress-nginx-admission-patch-4sbhm
	a8b9805145b43       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   3 minutes ago       Exited              create                    0                   924bb73bddc71       ingress-nginx-admission-create-lcdwq
	c421f326ac527       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             3 minutes ago       Running             local-path-provisioner    0                   7c06373c40e1f       local-path-provisioner-78b46b4d5c-kh7q9
	75b99f4b12fe8       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             3 minutes ago       Running             coredns                   0                   aa5375b09e41a       coredns-5dd5756b68-g2rlh
	fdad01371d214       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             3 minutes ago       Running             storage-provisioner       0                   749423ffcdcf2       storage-provisioner
	7f4e879a2be6d       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                                             4 minutes ago       Running             kube-proxy                0                   3a68375fbf73e       kube-proxy-hnr6q
	fad020ba8010b       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                                             4 minutes ago       Running             kindnet-cni               0                   a0fc51e399cd6       kindnet-vdzl4
	da9f34db7f0f0       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                                             4 minutes ago       Running             kube-scheduler            0                   14412625961f5       kube-scheduler-addons-411655
	7fc50e357d121       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             4 minutes ago       Running             etcd                      0                   59c28dd4c05bb       etcd-addons-411655
	0d7b8db496aea       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                                             4 minutes ago       Running             kube-apiserver            0                   a36e4e59e8f14       kube-apiserver-addons-411655
	27b4dfbecd9f1       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                                             4 minutes ago       Running             kube-controller-manager   0                   67d811eae57ce       kube-controller-manager-addons-411655
	
	
	==> coredns [75b99f4b12fe899e08e647004cfe96980e4e20ac612633683d16c2e6d4d8f903] <==
	[INFO] 10.244.0.18:55942 - 41789 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000107021s
	[INFO] 10.244.0.18:43002 - 35650 "AAAA IN registry.kube-system.svc.cluster.local.europe-west4-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.003715893s
	[INFO] 10.244.0.18:43002 - 33101 "A IN registry.kube-system.svc.cluster.local.europe-west4-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.004056657s
	[INFO] 10.244.0.18:44128 - 61824 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.003292729s
	[INFO] 10.244.0.18:44128 - 23170 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.00354118s
	[INFO] 10.244.0.18:45315 - 33883 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.00329701s
	[INFO] 10.244.0.18:45315 - 35933 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.003639823s
	[INFO] 10.244.0.18:51452 - 39763 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000072345s
	[INFO] 10.244.0.18:51452 - 27996 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00010924s
	[INFO] 10.244.0.21:57242 - 10660 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000196918s
	[INFO] 10.244.0.21:44037 - 12447 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000102593s
	[INFO] 10.244.0.21:42273 - 3295 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000100103s
	[INFO] 10.244.0.21:35425 - 58310 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000152096s
	[INFO] 10.244.0.21:43349 - 10594 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000149005s
	[INFO] 10.244.0.21:40217 - 20777 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000107613s
	[INFO] 10.244.0.21:41110 - 11022 "A IN storage.googleapis.com.europe-west4-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.004291352s
	[INFO] 10.244.0.21:59108 - 39665 "AAAA IN storage.googleapis.com.europe-west4-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.004422206s
	[INFO] 10.244.0.21:55900 - 46685 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.003870932s
	[INFO] 10.244.0.21:54481 - 49993 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.004273943s
	[INFO] 10.244.0.21:40918 - 52048 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003451428s
	[INFO] 10.244.0.21:45313 - 3417 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.0036879s
	[INFO] 10.244.0.21:48981 - 39929 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 306 0.000557215s
	[INFO] 10.244.0.21:41943 - 11052 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000662371s
	[INFO] 10.244.0.23:43719 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000115846s
	[INFO] 10.244.0.23:45333 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000090422s
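	
	The NXDOMAIN bursts are the expected effect of the pod resolv.conf search path: each external name such as storage.googleapis.com is first tried against every cluster and GCE search domain before the bare query succeeds with NOERROR. The search list can be confirmed from any running pod (a sketch; the nginx pod from the container list above is used for illustration):
	
	    $ kubectl --context addons-411655 exec nginx -- cat /etc/resolv.conf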
	
	
	==> describe nodes <==
	Name:               addons-411655
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-411655
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e8fa5f64d0e7272be43ff25ed3826261f0a2578
	                    minikube.k8s.io/name=addons-411655
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_16T02_37_44_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-411655
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 02:37:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-411655
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Jan 2024 02:42:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Jan 2024 02:42:18 +0000   Tue, 16 Jan 2024 02:37:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Jan 2024 02:42:18 +0000   Tue, 16 Jan 2024 02:37:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Jan 2024 02:42:18 +0000   Tue, 16 Jan 2024 02:37:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Jan 2024 02:42:18 +0000   Tue, 16 Jan 2024 02:38:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-411655
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	System Info:
	  Machine ID:                 fc40d20ca2554f3c9bf12eb1cf0d20fa
	  System UUID:                79e67873-d1e6-4236-a0e9-b01bc3926359
	  Boot ID:                    cc6eb99d-2787-4545-a9c9-22d5006806a3
	  Kernel Version:             5.15.0-1048-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-4cqjh           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	  gcp-auth                    gcp-auth-d4c87556c-jhq8f                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m14s
	  headlamp                    headlamp-7ddfbb94ff-9t5sd                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  kube-system                 coredns-5dd5756b68-g2rlh                   100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m23s
	  kube-system                 etcd-addons-411655                         100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m35s
	  kube-system                 kindnet-vdzl4                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m23s
	  kube-system                 kube-apiserver-addons-411655               250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 kube-controller-manager-addons-411655      200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 kube-proxy-hnr6q                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 kube-scheduler-addons-411655               100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	  local-path-storage          local-path-provisioner-78b46b4d5c-kh7q9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-8b6mj             0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             348Mi (1%)  476Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m18s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m42s (x8 over 4m42s)  kubelet          Node addons-411655 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m42s (x8 over 4m42s)  kubelet          Node addons-411655 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m42s (x8 over 4m42s)  kubelet          Node addons-411655 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m36s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m36s                  kubelet          Node addons-411655 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m36s                  kubelet          Node addons-411655 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m36s                  kubelet          Node addons-411655 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m24s                  node-controller  Node addons-411655 event: Registered Node addons-411655 in Controller
	  Normal  NodeReady                3m51s                  kubelet          Node addons-411655 status is now: NodeReady
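	
	This node summary is the standard output of kubectl describe node and can be regenerated while the cluster is up:
	
	    $ kubectl --context addons-411655 describe node addons-411655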
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fe e9 d9 1c b8 51 08 06
	[  +0.084286] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0e e7 0b 09 67 34 08 06
	[ +14.364060] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a 88 99 8a 72 59 08 06
	[  +0.000350] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 7e e3 7c 5b 0d 3c 08 06
	[Jan16 02:31] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 8e 26 6b 18 ff 64 08 06
	[  +0.000349] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 0e e7 0b 09 67 34 08 06
	[Jan16 02:39] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 4e 02 88 c2 72 dd 06 40 bd ca 39 02 08 00
	[  +1.031587] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 4e 02 88 c2 72 dd 06 40 bd ca 39 02 08 00
	[Jan16 02:40] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 4e 02 88 c2 72 dd 06 40 bd ca 39 02 08 00
	[  +4.255671] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4e 02 88 c2 72 dd 06 40 bd ca 39 02 08 00
	[  +8.187383] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 4e 02 88 c2 72 dd 06 40 bd ca 39 02 08 00
	[ +16.126755] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 4e 02 88 c2 72 dd 06 40 bd ca 39 02 08 00
	[Jan16 02:41] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 4e 02 88 c2 72 dd 06 40 bd ca 39 02 08 00
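	
	The "martian source" lines are the kernel flagging packets whose source address should not appear on the receiving interface; 127.0.0.1 arriving on eth0 is a known side effect of route_localnet=1, which kube-proxy enables (see its log below). Both sysctls can be inspected on the node (a sketch):
	
	    $ minikube -p addons-411655 ssh -- sysctl net.ipv4.conf.all.log_martians net.ipv4.conf.all.route_localnet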
	
	
	==> etcd [7fc50e357d121aba722ee080161f52f6e3b448b51f0852ecfafc508f04940d46] <==
	{"level":"info","ts":"2024-01-16T02:38:01.102305Z","caller":"traceutil/trace.go:171","msg":"trace[383897211] transaction","detail":"{read_only:false; response_revision:449; number_of_response:1; }","duration":"185.079549ms","start":"2024-01-16T02:38:00.917204Z","end":"2024-01-16T02:38:01.102284Z","steps":["trace[383897211] 'process raft request'  (duration: 183.847296ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T02:38:01.102855Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"185.353792ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-01-16T02:38:01.102938Z","caller":"traceutil/trace.go:171","msg":"trace[258081164] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:449; }","duration":"185.443551ms","start":"2024-01-16T02:38:00.917484Z","end":"2024-01-16T02:38:01.102927Z","steps":["trace[258081164] 'agreement among raft nodes before linearized reading'  (duration: 185.319495ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T02:38:01.103325Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"185.793745ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-01-16T02:38:01.103403Z","caller":"traceutil/trace.go:171","msg":"trace[1277341387] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:449; }","duration":"185.873364ms","start":"2024-01-16T02:38:00.91752Z","end":"2024-01-16T02:38:01.103394Z","steps":["trace[1277341387] 'agreement among raft nodes before linearized reading'  (duration: 185.773227ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T02:38:01.413882Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.721783ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/replication-controller\" ","response":"range_response_count:1 size:209"}
	{"level":"info","ts":"2024-01-16T02:38:01.414858Z","caller":"traceutil/trace.go:171","msg":"trace[1579262713] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/replication-controller; range_end:; response_count:1; response_revision:464; }","duration":"105.700864ms","start":"2024-01-16T02:38:01.309136Z","end":"2024-01-16T02:38:01.414837Z","steps":["trace[1579262713] 'agreement among raft nodes before linearized reading'  (duration: 104.69643ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T02:38:01.41453Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.465061ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/local-path-storage/local-path-provisioner-service-account\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-16T02:38:01.41554Z","caller":"traceutil/trace.go:171","msg":"trace[1347948967] range","detail":"{range_begin:/registry/serviceaccounts/local-path-storage/local-path-provisioner-service-account; range_end:; response_count:0; response_revision:464; }","duration":"106.478011ms","start":"2024-01-16T02:38:01.309047Z","end":"2024-01-16T02:38:01.415525Z","steps":["trace[1347948967] 'agreement among raft nodes before linearized reading'  (duration: 105.440338ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T02:38:39.986846Z","caller":"traceutil/trace.go:171","msg":"trace[986902339] transaction","detail":"{read_only:false; response_revision:1013; number_of_response:1; }","duration":"140.821703ms","start":"2024-01-16T02:38:39.846007Z","end":"2024-01-16T02:38:39.986829Z","steps":["trace[986902339] 'process raft request'  (duration: 140.70543ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T02:38:39.98683Z","caller":"traceutil/trace.go:171","msg":"trace[1162960163] linearizableReadLoop","detail":"{readStateIndex:1040; appliedIndex:1039; }","duration":"118.996938ms","start":"2024-01-16T02:38:39.867814Z","end":"2024-01-16T02:38:39.986811Z","steps":["trace[1162960163] 'read index received'  (duration: 118.893546ms)","trace[1162960163] 'applied index is now lower than readState.Index'  (duration: 102.694µs)"],"step_count":2}
	{"level":"warn","ts":"2024-01-16T02:38:39.98698Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"119.17167ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets/ingress-nginx/ingress-nginx-admission\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-16T02:38:39.98701Z","caller":"traceutil/trace.go:171","msg":"trace[2054557761] range","detail":"{range_begin:/registry/secrets/ingress-nginx/ingress-nginx-admission; range_end:; response_count:0; response_revision:1013; }","duration":"119.217113ms","start":"2024-01-16T02:38:39.867782Z","end":"2024-01-16T02:38:39.987Z","steps":["trace[2054557761] 'agreement among raft nodes before linearized reading'  (duration: 119.094602ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T02:39:44.313611Z","caller":"traceutil/trace.go:171","msg":"trace[1161282556] transaction","detail":"{read_only:false; response_revision:1299; number_of_response:1; }","duration":"138.576363ms","start":"2024-01-16T02:39:44.175008Z","end":"2024-01-16T02:39:44.313584Z","steps":["trace[1161282556] 'process raft request'  (duration: 79.198934ms)","trace[1161282556] 'compare'  (duration: 59.198041ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-16T02:39:44.313731Z","caller":"traceutil/trace.go:171","msg":"trace[193843477] transaction","detail":"{read_only:false; response_revision:1302; number_of_response:1; }","duration":"136.704597ms","start":"2024-01-16T02:39:44.177014Z","end":"2024-01-16T02:39:44.313719Z","steps":["trace[193843477] 'process raft request'  (duration: 136.655586ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T02:39:44.313836Z","caller":"traceutil/trace.go:171","msg":"trace[437567739] transaction","detail":"{read_only:false; response_revision:1301; number_of_response:1; }","duration":"138.495751ms","start":"2024-01-16T02:39:44.175321Z","end":"2024-01-16T02:39:44.313817Z","steps":["trace[437567739] 'process raft request'  (duration: 138.31448ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T02:39:44.314029Z","caller":"traceutil/trace.go:171","msg":"trace[1052529995] transaction","detail":"{read_only:false; response_revision:1300; number_of_response:1; }","duration":"138.933874ms","start":"2024-01-16T02:39:44.175081Z","end":"2024-01-16T02:39:44.314015Z","steps":["trace[1052529995] 'process raft request'  (duration: 138.437653ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T02:39:44.605612Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"114.521077ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/helm-test\" ","response":"range_response_count:1 size:2463"}
	{"level":"info","ts":"2024-01-16T02:39:44.605679Z","caller":"traceutil/trace.go:171","msg":"trace[1331547106] range","detail":"{range_begin:/registry/pods/kube-system/helm-test; range_end:; response_count:1; response_revision:1303; }","duration":"114.620977ms","start":"2024-01-16T02:39:44.491043Z","end":"2024-01-16T02:39:44.605664Z","steps":["trace[1331547106] 'range keys from in-memory index tree'  (duration: 114.368509ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T02:39:44.66759Z","caller":"traceutil/trace.go:171","msg":"trace[1885272948] transaction","detail":"{read_only:false; response_revision:1304; number_of_response:1; }","duration":"118.531435ms","start":"2024-01-16T02:39:44.549041Z","end":"2024-01-16T02:39:44.667572Z","steps":["trace[1885272948] 'process raft request'  (duration: 118.40129ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T02:40:47.91179Z","caller":"traceutil/trace.go:171","msg":"trace[178233857] transaction","detail":"{read_only:false; response_revision:1664; number_of_response:1; }","duration":"128.437212ms","start":"2024-01-16T02:40:47.78333Z","end":"2024-01-16T02:40:47.911767Z","steps":["trace[178233857] 'process raft request'  (duration: 61.213363ms)","trace[178233857] 'compare'  (duration: 67.075326ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-16T02:40:47.911872Z","caller":"traceutil/trace.go:171","msg":"trace[1344151346] linearizableReadLoop","detail":"{readStateIndex:1738; appliedIndex:1736; }","duration":"100.260388ms","start":"2024-01-16T02:40:47.811593Z","end":"2024-01-16T02:40:47.911854Z","steps":["trace[1344151346] 'read index received'  (duration: 32.947355ms)","trace[1344151346] 'applied index is now lower than readState.Index'  (duration: 67.31164ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-16T02:40:47.912008Z","caller":"traceutil/trace.go:171","msg":"trace[1853159749] transaction","detail":"{read_only:false; response_revision:1665; number_of_response:1; }","duration":"126.370078ms","start":"2024-01-16T02:40:47.785627Z","end":"2024-01-16T02:40:47.911997Z","steps":["trace[1853159749] 'process raft request'  (duration: 126.084603ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T02:40:47.912121Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.5295ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/\" range_end:\"/registry/events0\" count_only:true ","response":"range_response_count:0 size:8"}
	{"level":"info","ts":"2024-01-16T02:40:47.912151Z","caller":"traceutil/trace.go:171","msg":"trace[1771487990] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; response_count:0; response_revision:1665; }","duration":"100.582673ms","start":"2024-01-16T02:40:47.811561Z","end":"2024-01-16T02:40:47.912143Z","steps":["trace[1771487990] 'agreement among raft nodes before linearized reading'  (duration: 100.352952ms)"],"step_count":1}
	
	
	==> gcp-auth [de7876dcf40fd38a6f9f25d7185c627040c598c27ea145df093a267dc224cc7f] <==
	2024/01/16 02:39:23 GCP Auth Webhook started!
	2024/01/16 02:39:38 Ready to marshal response ...
	2024/01/16 02:39:38 Ready to write response ...
	2024/01/16 02:39:43 Ready to marshal response ...
	2024/01/16 02:39:43 Ready to write response ...
	2024/01/16 02:39:45 Ready to marshal response ...
	2024/01/16 02:39:45 Ready to write response ...
	2024/01/16 02:39:49 Ready to marshal response ...
	2024/01/16 02:39:49 Ready to write response ...
	2024/01/16 02:39:49 Ready to marshal response ...
	2024/01/16 02:39:49 Ready to write response ...
	2024/01/16 02:40:04 Ready to marshal response ...
	2024/01/16 02:40:04 Ready to write response ...
	2024/01/16 02:40:05 Ready to marshal response ...
	2024/01/16 02:40:05 Ready to write response ...
	2024/01/16 02:40:05 Ready to marshal response ...
	2024/01/16 02:40:05 Ready to write response ...
	2024/01/16 02:40:05 Ready to marshal response ...
	2024/01/16 02:40:05 Ready to write response ...
	2024/01/16 02:40:33 Ready to marshal response ...
	2024/01/16 02:40:33 Ready to write response ...
	2024/01/16 02:41:02 Ready to marshal response ...
	2024/01/16 02:41:02 Ready to write response ...
	2024/01/16 02:42:08 Ready to marshal response ...
	2024/01/16 02:42:08 Ready to write response ...
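	
	Each marshal/write pair is one admission review handled by the gcp-auth mutating webhook. Its registration can be listed to confirm it is intercepting pod creation (a sketch):
	
	    $ kubectl --context addons-411655 get mutatingwebhookconfigurations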
	
	
	==> kernel <==
	 02:42:19 up  2:24,  0 users,  load average: 0.31, 0.96, 1.65
	Linux addons-411655 5.15.0-1048-gcp #56~20.04.1-Ubuntu SMP Fri Nov 24 16:52:37 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
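	
	These lines are the node's uptime, uname -a, and the PRETTY_NAME field of /etc/os-release, reproducible over SSH (a sketch; quoting of the compound command may vary by shell):
	
	    $ minikube -p addons-411655 ssh -- 'uptime; uname -a; grep PRETTY_NAME /etc/os-release'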
	
	
	==> kindnet [fad020ba8010bc6f08acb7228b2381ea7b0ec492f62271212d32ad25880d262f] <==
	I0116 02:40:18.009500       1 main.go:227] handling current node
	I0116 02:40:28.022111       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 02:40:28.022144       1 main.go:227] handling current node
	I0116 02:40:38.026158       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 02:40:38.026180       1 main.go:227] handling current node
	I0116 02:40:48.035620       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 02:40:48.035645       1 main.go:227] handling current node
	I0116 02:40:58.039791       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 02:40:58.039813       1 main.go:227] handling current node
	I0116 02:41:08.043498       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 02:41:08.043525       1 main.go:227] handling current node
	I0116 02:41:18.047746       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 02:41:18.047781       1 main.go:227] handling current node
	I0116 02:41:28.060668       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 02:41:28.060701       1 main.go:227] handling current node
	I0116 02:41:38.064942       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 02:41:38.064965       1 main.go:227] handling current node
	I0116 02:41:48.078197       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 02:41:48.078228       1 main.go:227] handling current node
	I0116 02:41:58.081760       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 02:41:58.081783       1 main.go:227] handling current node
	I0116 02:42:08.093887       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 02:42:08.093908       1 main.go:227] handling current node
	I0116 02:42:18.097701       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 02:42:18.097725       1 main.go:227] handling current node
	
	
	==> kube-apiserver [0d7b8db496aea5ae9689c42b25bd0e262884ed04625506108e39a3264bbe2036] <==
	W0116 02:39:51.824463       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0116 02:40:05.163787       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0116 02:40:05.532560       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.98.187.115"}
	I0116 02:40:46.080628       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0116 02:41:18.059874       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0116 02:41:18.059938       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0116 02:41:18.067302       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0116 02:41:18.067358       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0116 02:41:18.073667       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0116 02:41:18.073800       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0116 02:41:18.074597       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0116 02:41:18.074630       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0116 02:41:18.083680       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0116 02:41:18.083801       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0116 02:41:18.085066       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0116 02:41:18.085104       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0116 02:41:18.106168       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0116 02:41:18.106306       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0116 02:41:18.110439       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0116 02:41:18.110479       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0116 02:41:19.074878       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0116 02:41:19.111093       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0116 02:41:19.119260       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0116 02:42:09.150209       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.96.100.191"}
	E0116 02:42:11.511158       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
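	
	The lone authentication error at 02:42:11 coincides with the ingress addon teardown visible in the controller-manager log below: the ingress-nginx service account was removed while the draining controller pod was still presenting its token. Whether the account still exists can be checked directly (a sketch):
	
	    $ kubectl --context addons-411655 -n ingress-nginx get serviceaccount ingress-nginx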
	
	
	==> kube-controller-manager [27b4dfbecd9f12f0c1597b5634d5e5d0b74846974be246e761d818f76ff640f3] <==
	W0116 02:41:34.618926       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0116 02:41:34.618962       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0116 02:41:34.713799       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0116 02:41:34.713842       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0116 02:41:35.670443       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0116 02:41:35.670484       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0116 02:41:40.881688       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0116 02:41:40.881721       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0116 02:41:54.568307       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0116 02:41:54.568342       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0116 02:41:56.042712       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0116 02:41:56.042743       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0116 02:41:57.229351       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0116 02:41:57.229382       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0116 02:42:08.976204       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0116 02:42:08.984275       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-4cqjh"
	I0116 02:42:08.988885       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="13.012804ms"
	I0116 02:42:08.993468       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="4.534837ms"
	I0116 02:42:08.993546       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="40.46µs"
	I0116 02:42:09.002169       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="77.523µs"
	I0116 02:42:11.421535       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0116 02:42:11.422545       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="7.572µs"
	I0116 02:42:11.425909       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0116 02:42:11.867364       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="4.965163ms"
	I0116 02:42:11.867453       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="49.151µs"
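	
	The repeated PartialObjectMetadata watch failures are consistent with the metadata informer still watching the snapshot.storage.k8s.io resources whose watchers were terminated at 02:41:19 (see the kube-apiserver log above); they subside once the informers resync. Whether those CRDs are gone can be verified with (a sketch):
	
	    $ kubectl --context addons-411655 get crd | grep snapshot.storage.k8s.io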
	
	
	==> kube-proxy [7f4e879a2be6d68cddde103535394ef697b4a335799e487005c4fde0c142fa23] <==
	I0116 02:37:58.217193       1 server_others.go:69] "Using iptables proxy"
	I0116 02:37:58.618673       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0116 02:38:00.015425       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0116 02:38:00.815266       1 server_others.go:152] "Using iptables Proxier"
	I0116 02:38:00.815324       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0116 02:38:00.815334       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0116 02:38:00.815376       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0116 02:38:00.815711       1 server.go:846] "Version info" version="v1.28.4"
	I0116 02:38:00.815733       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0116 02:38:00.823356       1 config.go:315] "Starting node config controller"
	I0116 02:38:00.823435       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0116 02:38:00.903756       1 config.go:97] "Starting endpoint slice config controller"
	I0116 02:38:00.904163       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0116 02:38:00.904306       1 config.go:188] "Starting service config controller"
	I0116 02:38:00.904325       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0116 02:38:01.009276       1 shared_informer.go:318] Caches are synced for service config
	I0116 02:38:01.208924       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0116 02:38:01.209564       1 shared_informer.go:318] Caches are synced for node config
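	
	The proxier note about route_localnet=1 is what allows NodePorts to answer on 127.0.0.1 (and what generates the martian-source dmesg entries above). The sysctl and the resulting NAT chain can be inspected on the node (a sketch):
	
	    $ minikube -p addons-411655 ssh -- 'sysctl net.ipv4.conf.all.route_localnet; sudo iptables -t nat -S KUBE-NODEPORTS'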
	
	
	==> kube-scheduler [da9f34db7f0f03c056746d0253bd5551bec8d894159cf12bd8a5a9ea49f153d0] <==
	W0116 02:37:40.810160       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0116 02:37:40.810201       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0116 02:37:40.810236       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0116 02:37:40.810329       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0116 02:37:40.810516       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0116 02:37:40.810613       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0116 02:37:40.810544       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0116 02:37:40.810928       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0116 02:37:40.810382       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0116 02:37:40.810468       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0116 02:37:40.811117       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0116 02:37:40.810551       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0116 02:37:40.810368       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0116 02:37:40.811188       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0116 02:37:40.810737       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0116 02:37:40.811218       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0116 02:37:40.810775       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0116 02:37:40.811245       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0116 02:37:41.723587       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0116 02:37:41.723632       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0116 02:37:41.745016       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0116 02:37:41.745057       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0116 02:37:41.772567       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0116 02:37:41.772597       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0116 02:37:44.707994       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 16 02:42:08 addons-411655 kubelet[1547]: I0116 02:42:08.991946    1547 memory_manager.go:346] "RemoveStaleState removing state" podUID="c2043436-327e-4cdf-aa93-1f1e3429d0df" containerName="volume-snapshot-controller"
	Jan 16 02:42:09 addons-411655 kubelet[1547]: I0116 02:42:09.200643    1547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/6e94f19c-088d-4736-8bfd-9752e29bc50b-gcp-creds\") pod \"hello-world-app-5d77478584-4cqjh\" (UID: \"6e94f19c-088d-4736-8bfd-9752e29bc50b\") " pod="default/hello-world-app-5d77478584-4cqjh"
	Jan 16 02:42:09 addons-411655 kubelet[1547]: I0116 02:42:09.200720    1547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmr9p\" (UniqueName: \"kubernetes.io/projected/6e94f19c-088d-4736-8bfd-9752e29bc50b-kube-api-access-zmr9p\") pod \"hello-world-app-5d77478584-4cqjh\" (UID: \"6e94f19c-088d-4736-8bfd-9752e29bc50b\") " pod="default/hello-world-app-5d77478584-4cqjh"
	Jan 16 02:42:09 addons-411655 kubelet[1547]: W0116 02:42:09.623817    1547 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/e6c8900ced86d05d54b8a705a2e90759b6992190aa1282aa5bdf448757002050/crio-9dcefe093ff1379844ea2c1432e9d52b9272b1f1f3372a2466d924584ec40a4f WatchSource:0}: Error finding container 9dcefe093ff1379844ea2c1432e9d52b9272b1f1f3372a2466d924584ec40a4f: Status 404 returned error can't find the container with id 9dcefe093ff1379844ea2c1432e9d52b9272b1f1f3372a2466d924584ec40a4f
	Jan 16 02:42:10 addons-411655 kubelet[1547]: I0116 02:42:10.107500    1547 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjgrd\" (UniqueName: \"kubernetes.io/projected/b464dc23-2aa6-4791-abfd-3fa458eb2f99-kube-api-access-jjgrd\") pod \"b464dc23-2aa6-4791-abfd-3fa458eb2f99\" (UID: \"b464dc23-2aa6-4791-abfd-3fa458eb2f99\") "
	Jan 16 02:42:10 addons-411655 kubelet[1547]: I0116 02:42:10.109394    1547 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b464dc23-2aa6-4791-abfd-3fa458eb2f99-kube-api-access-jjgrd" (OuterVolumeSpecName: "kube-api-access-jjgrd") pod "b464dc23-2aa6-4791-abfd-3fa458eb2f99" (UID: "b464dc23-2aa6-4791-abfd-3fa458eb2f99"). InnerVolumeSpecName "kube-api-access-jjgrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 16 02:42:10 addons-411655 kubelet[1547]: I0116 02:42:10.208715    1547 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-jjgrd\" (UniqueName: \"kubernetes.io/projected/b464dc23-2aa6-4791-abfd-3fa458eb2f99-kube-api-access-jjgrd\") on node \"addons-411655\" DevicePath \"\""
	Jan 16 02:42:10 addons-411655 kubelet[1547]: I0116 02:42:10.849590    1547 scope.go:117] "RemoveContainer" containerID="2d2502fd1c2afdc8934f2e61539803b7a3ae2a660e99bc2ab6f41822fefcba0b"
	Jan 16 02:42:10 addons-411655 kubelet[1547]: I0116 02:42:10.929503    1547 scope.go:117] "RemoveContainer" containerID="2d2502fd1c2afdc8934f2e61539803b7a3ae2a660e99bc2ab6f41822fefcba0b"
	Jan 16 02:42:10 addons-411655 kubelet[1547]: E0116 02:42:10.929981    1547 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d2502fd1c2afdc8934f2e61539803b7a3ae2a660e99bc2ab6f41822fefcba0b\": container with ID starting with 2d2502fd1c2afdc8934f2e61539803b7a3ae2a660e99bc2ab6f41822fefcba0b not found: ID does not exist" containerID="2d2502fd1c2afdc8934f2e61539803b7a3ae2a660e99bc2ab6f41822fefcba0b"
	Jan 16 02:42:10 addons-411655 kubelet[1547]: I0116 02:42:10.930045    1547 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d2502fd1c2afdc8934f2e61539803b7a3ae2a660e99bc2ab6f41822fefcba0b"} err="failed to get container status \"2d2502fd1c2afdc8934f2e61539803b7a3ae2a660e99bc2ab6f41822fefcba0b\": rpc error: code = NotFound desc = could not find container \"2d2502fd1c2afdc8934f2e61539803b7a3ae2a660e99bc2ab6f41822fefcba0b\": container with ID starting with 2d2502fd1c2afdc8934f2e61539803b7a3ae2a660e99bc2ab6f41822fefcba0b not found: ID does not exist"
	Jan 16 02:42:11 addons-411655 kubelet[1547]: I0116 02:42:11.719331    1547 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="25372c1e-21a8-4d81-b5af-9b2fc0e11ff4" path="/var/lib/kubelet/pods/25372c1e-21a8-4d81-b5af-9b2fc0e11ff4/volumes"
	Jan 16 02:42:11 addons-411655 kubelet[1547]: I0116 02:42:11.719811    1547 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="58c42916-c819-48c7-993b-e0f563a742b2" path="/var/lib/kubelet/pods/58c42916-c819-48c7-993b-e0f563a742b2/volumes"
	Jan 16 02:42:11 addons-411655 kubelet[1547]: I0116 02:42:11.720236    1547 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b464dc23-2aa6-4791-abfd-3fa458eb2f99" path="/var/lib/kubelet/pods/b464dc23-2aa6-4791-abfd-3fa458eb2f99/volumes"
	Jan 16 02:42:14 addons-411655 kubelet[1547]: I0116 02:42:14.737766    1547 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfndq\" (UniqueName: \"kubernetes.io/projected/749b7a18-9a74-48e0-98b8-dd14c0057f7b-kube-api-access-bfndq\") pod \"749b7a18-9a74-48e0-98b8-dd14c0057f7b\" (UID: \"749b7a18-9a74-48e0-98b8-dd14c0057f7b\") "
	Jan 16 02:42:14 addons-411655 kubelet[1547]: I0116 02:42:14.737814    1547 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/749b7a18-9a74-48e0-98b8-dd14c0057f7b-webhook-cert\") pod \"749b7a18-9a74-48e0-98b8-dd14c0057f7b\" (UID: \"749b7a18-9a74-48e0-98b8-dd14c0057f7b\") "
	Jan 16 02:42:14 addons-411655 kubelet[1547]: I0116 02:42:14.739593    1547 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/749b7a18-9a74-48e0-98b8-dd14c0057f7b-kube-api-access-bfndq" (OuterVolumeSpecName: "kube-api-access-bfndq") pod "749b7a18-9a74-48e0-98b8-dd14c0057f7b" (UID: "749b7a18-9a74-48e0-98b8-dd14c0057f7b"). InnerVolumeSpecName "kube-api-access-bfndq". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 16 02:42:14 addons-411655 kubelet[1547]: I0116 02:42:14.739779    1547 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/749b7a18-9a74-48e0-98b8-dd14c0057f7b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "749b7a18-9a74-48e0-98b8-dd14c0057f7b" (UID: "749b7a18-9a74-48e0-98b8-dd14c0057f7b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 16 02:42:14 addons-411655 kubelet[1547]: I0116 02:42:14.838221    1547 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-bfndq\" (UniqueName: \"kubernetes.io/projected/749b7a18-9a74-48e0-98b8-dd14c0057f7b-kube-api-access-bfndq\") on node \"addons-411655\" DevicePath \"\""
	Jan 16 02:42:14 addons-411655 kubelet[1547]: I0116 02:42:14.838264    1547 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/749b7a18-9a74-48e0-98b8-dd14c0057f7b-webhook-cert\") on node \"addons-411655\" DevicePath \"\""
	Jan 16 02:42:14 addons-411655 kubelet[1547]: I0116 02:42:14.862354    1547 scope.go:117] "RemoveContainer" containerID="aa6db0c001d275387c0fb8afc3178f41454070cf6fa21982d3dafbcb09d9ae1b"
	Jan 16 02:42:14 addons-411655 kubelet[1547]: I0116 02:42:14.877743    1547 scope.go:117] "RemoveContainer" containerID="aa6db0c001d275387c0fb8afc3178f41454070cf6fa21982d3dafbcb09d9ae1b"
	Jan 16 02:42:14 addons-411655 kubelet[1547]: E0116 02:42:14.878143    1547 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa6db0c001d275387c0fb8afc3178f41454070cf6fa21982d3dafbcb09d9ae1b\": container with ID starting with aa6db0c001d275387c0fb8afc3178f41454070cf6fa21982d3dafbcb09d9ae1b not found: ID does not exist" containerID="aa6db0c001d275387c0fb8afc3178f41454070cf6fa21982d3dafbcb09d9ae1b"
	Jan 16 02:42:14 addons-411655 kubelet[1547]: I0116 02:42:14.878200    1547 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa6db0c001d275387c0fb8afc3178f41454070cf6fa21982d3dafbcb09d9ae1b"} err="failed to get container status \"aa6db0c001d275387c0fb8afc3178f41454070cf6fa21982d3dafbcb09d9ae1b\": rpc error: code = NotFound desc = could not find container \"aa6db0c001d275387c0fb8afc3178f41454070cf6fa21982d3dafbcb09d9ae1b\": container with ID starting with aa6db0c001d275387c0fb8afc3178f41454070cf6fa21982d3dafbcb09d9ae1b not found: ID does not exist"
	Jan 16 02:42:15 addons-411655 kubelet[1547]: I0116 02:42:15.719600    1547 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="749b7a18-9a74-48e0-98b8-dd14c0057f7b" path="/var/lib/kubelet/pods/749b7a18-9a74-48e0-98b8-dd14c0057f7b/volumes"
	
	
	==> storage-provisioner [fdad01371d214d4de1dfd6372100e828a5cc28dd43e8e2211df0ac8857d1b1d9] <==
	I0116 02:38:28.937883       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0116 02:38:28.946405       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0116 02:38:28.946444       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0116 02:38:28.953704       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0116 02:38:28.953844       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-411655_727a39f1-3d34-4e88-9ce8-e34c5167044f!
	I0116 02:38:28.953844       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ed9957b1-5188-4a5b-aedf-c9a7a09f7a74", APIVersion:"v1", ResourceVersion:"919", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-411655_727a39f1-3d34-4e88-9ce8-e34c5167044f became leader
	I0116 02:38:29.054523       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-411655_727a39f1-3d34-4e88-9ce8-e34c5167044f!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-411655 -n addons-411655
helpers_test.go:261: (dbg) Run:  kubectl --context addons-411655 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (155.07s)
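
The exit status 28 reported above is curl's "operation timed out" code, passed back through the ssh session: the request to the ingress controller on 127.0.0.1:80 inside the node never received a response. A minimal sketch for re-running the same probe by hand against a live profile, assuming the addons-411655 profile from this run is still up (the --max-time flag is an illustrative addition, not part of the test's invocation):

	out/minikube-linux-amd64 -p addons-411655 ssh "curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"
	# curl exits 0 and prints the backend response when the ingress routes correctly;
	# it exits 28 when no response arrives within --max-time seconds.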

TestIngressAddonLegacy/serial/ValidateIngressAddons (180.94s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:207: (dbg) Run:  kubectl --context ingress-addon-legacy-570599 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:207: (dbg) Done: kubectl --context ingress-addon-legacy-570599 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (14.028612114s)
addons_test.go:232: (dbg) Run:  kubectl --context ingress-addon-legacy-570599 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context ingress-addon-legacy-570599 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [ae88309d-a39b-40de-b0fb-fb88a2b29c48] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [ae88309d-a39b-40de-b0fb-fb88a2b29c48] Running
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.003256837s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-570599 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E0116 02:49:33.714972  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/addons-411655/client.crt: no such file or directory
E0116 02:50:01.400760  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/addons-411655/client.crt: no such file or directory
E0116 02:50:53.306794  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/functional-380867/client.crt: no such file or directory
E0116 02:50:53.312077  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/functional-380867/client.crt: no such file or directory
E0116 02:50:53.322361  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/functional-380867/client.crt: no such file or directory
E0116 02:50:53.342665  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/functional-380867/client.crt: no such file or directory
E0116 02:50:53.382961  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/functional-380867/client.crt: no such file or directory
E0116 02:50:53.463200  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/functional-380867/client.crt: no such file or directory
E0116 02:50:53.623607  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/functional-380867/client.crt: no such file or directory
E0116 02:50:53.944352  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/functional-380867/client.crt: no such file or directory
E0116 02:50:54.585297  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/functional-380867/client.crt: no such file or directory
E0116 02:50:55.865602  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/functional-380867/client.crt: no such file or directory
E0116 02:50:58.427426  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/functional-380867/client.crt: no such file or directory
E0116 02:51:03.547779  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/functional-380867/client.crt: no such file or directory
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-570599 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.235540266s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context ingress-addon-legacy-570599 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-570599 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
E0116 02:51:13.788349  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/functional-380867/client.crt: no such file or directory
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.00437639s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
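
The ";; connection timed out; no servers could be reached" output means the DNS query itself went unanswered: nothing responded on 192.168.49.2:53, where the ingress-dns addon is expected to serve, so hello-john.test never resolved. A rough way to narrow this down by hand on a live cluster, assuming the addon pod runs in kube-system as in a default minikube deployment (-timeout=2 just keeps the probe short):

	kubectl --context ingress-addon-legacy-570599 -n kube-system get pods -o wide
	nslookup -timeout=2 hello-john.test 192.168.49.2
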
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-570599 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-570599 addons disable ingress-dns --alsologtostderr -v=1: (1.54702998s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-570599 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-570599 addons disable ingress --alsologtostderr -v=1: (7.422208493s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-570599
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-570599:

-- stdout --
	[
	    {
	        "Id": "dc0cbf3edcfa65dfde582d545499d2166ea93c0e30dde427785179de978fb255",
	        "Created": "2024-01-16T02:47:09.480445651Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 491694,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-16T02:47:09.718152266Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9941de2e064a4a6a7155bfc66cedd2854b8c725b77bb8d4eaf81bef39f951dd7",
	        "ResolvConfPath": "/var/lib/docker/containers/dc0cbf3edcfa65dfde582d545499d2166ea93c0e30dde427785179de978fb255/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dc0cbf3edcfa65dfde582d545499d2166ea93c0e30dde427785179de978fb255/hostname",
	        "HostsPath": "/var/lib/docker/containers/dc0cbf3edcfa65dfde582d545499d2166ea93c0e30dde427785179de978fb255/hosts",
	        "LogPath": "/var/lib/docker/containers/dc0cbf3edcfa65dfde582d545499d2166ea93c0e30dde427785179de978fb255/dc0cbf3edcfa65dfde582d545499d2166ea93c0e30dde427785179de978fb255-json.log",
	        "Name": "/ingress-addon-legacy-570599",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-570599:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ingress-addon-legacy-570599",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c5416605bd02766e11ca6a60915cad6aeffe7c34a990df9816c3ee6fad26242e-init/diff:/var/lib/docker/overlay2/bba00fb4c7e32355be8b1614d52104fcb5f09794e9ed4467560e2767dcfd351b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c5416605bd02766e11ca6a60915cad6aeffe7c34a990df9816c3ee6fad26242e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c5416605bd02766e11ca6a60915cad6aeffe7c34a990df9816c3ee6fad26242e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c5416605bd02766e11ca6a60915cad6aeffe7c34a990df9816c3ee6fad26242e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-570599",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-570599/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-570599",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-570599",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-570599",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e68699ff6484296bb5570a9a678797104d6cff62a35cd33be2e8b44ec650fa3d",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33222"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33221"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33218"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33220"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33219"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e68699ff6484",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-570599": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "dc0cbf3edcfa",
	                        "ingress-addon-legacy-570599"
	                    ],
	                    "NetworkID": "a93c91aaf1c043a4b688ccee2515b36b69d134c6857e0f936ce56f3f72f16363",
	                    "EndpointID": "6ff77240612e1a27b80823b7412cca1f37de6949a9c78b714720deb63f054b62",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
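
The inspect output above contains the details post-mortem debugging usually needs: the node address (192.168.49.2 on the ingress-addon-legacy-570599 network) and the host port mappings, e.g. SSH published on 127.0.0.1:33222. A small sketch for extracting the SSH mapping directly from docker inspect, assuming jq is available on the agent:

	docker inspect ingress-addon-legacy-570599 | jq -r '.[0].NetworkSettings.Ports["22/tcp"][0].HostPort'
	# prints 33222 for the container captured above
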
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-570599 -n ingress-addon-legacy-570599
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-570599 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-570599 logs -n 25: (1.072962573s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	
	==> Audit <==
	|----------------|--------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                 Args                 |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|--------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-380867 ssh findmnt        | functional-380867           | jenkins | v1.32.0 | 16 Jan 24 02:46 UTC | 16 Jan 24 02:46 UTC |
	|                | -T /mount1                           |                             |         |         |                     |                     |
	| service        | functional-380867 service            | functional-380867           | jenkins | v1.32.0 | 16 Jan 24 02:46 UTC | 16 Jan 24 02:46 UTC |
	|                | --namespace=default --https          |                             |         |         |                     |                     |
	|                | --url hello-node                     |                             |         |         |                     |                     |
	| ssh            | functional-380867 ssh findmnt        | functional-380867           | jenkins | v1.32.0 | 16 Jan 24 02:46 UTC | 16 Jan 24 02:46 UTC |
	|                | -T /mount2                           |                             |         |         |                     |                     |
	| ssh            | functional-380867 ssh findmnt        | functional-380867           | jenkins | v1.32.0 | 16 Jan 24 02:46 UTC | 16 Jan 24 02:46 UTC |
	|                | -T /mount3                           |                             |         |         |                     |                     |
	| service        | functional-380867                    | functional-380867           | jenkins | v1.32.0 | 16 Jan 24 02:46 UTC | 16 Jan 24 02:46 UTC |
	|                | service hello-node --url             |                             |         |         |                     |                     |
	|                | --format={{.IP}}                     |                             |         |         |                     |                     |
	| mount          | -p functional-380867                 | functional-380867           | jenkins | v1.32.0 | 16 Jan 24 02:46 UTC |                     |
	|                | --kill=true                          |                             |         |         |                     |                     |
	| service        | functional-380867 service            | functional-380867           | jenkins | v1.32.0 | 16 Jan 24 02:46 UTC | 16 Jan 24 02:46 UTC |
	|                | hello-node --url                     |                             |         |         |                     |                     |
	| update-context | functional-380867                    | functional-380867           | jenkins | v1.32.0 | 16 Jan 24 02:46 UTC | 16 Jan 24 02:46 UTC |
	|                | update-context                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                             |         |         |                     |                     |
	| update-context | functional-380867                    | functional-380867           | jenkins | v1.32.0 | 16 Jan 24 02:46 UTC | 16 Jan 24 02:46 UTC |
	|                | update-context                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                             |         |         |                     |                     |
	| update-context | functional-380867                    | functional-380867           | jenkins | v1.32.0 | 16 Jan 24 02:46 UTC | 16 Jan 24 02:46 UTC |
	|                | update-context                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                             |         |         |                     |                     |
	| image          | functional-380867                    | functional-380867           | jenkins | v1.32.0 | 16 Jan 24 02:46 UTC | 16 Jan 24 02:46 UTC |
	|                | image ls --format short              |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| image          | functional-380867                    | functional-380867           | jenkins | v1.32.0 | 16 Jan 24 02:46 UTC | 16 Jan 24 02:46 UTC |
	|                | image ls --format yaml               |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| ssh            | functional-380867 ssh pgrep          | functional-380867           | jenkins | v1.32.0 | 16 Jan 24 02:46 UTC |                     |
	|                | buildkitd                            |                             |         |         |                     |                     |
	| image          | functional-380867                    | functional-380867           | jenkins | v1.32.0 | 16 Jan 24 02:46 UTC | 16 Jan 24 02:46 UTC |
	|                | image ls --format json               |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| image          | functional-380867 image build -t     | functional-380867           | jenkins | v1.32.0 | 16 Jan 24 02:46 UTC | 16 Jan 24 02:46 UTC |
	|                | localhost/my-image:functional-380867 |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr     |                             |         |         |                     |                     |
	| image          | functional-380867                    | functional-380867           | jenkins | v1.32.0 | 16 Jan 24 02:46 UTC | 16 Jan 24 02:46 UTC |
	|                | image ls --format table              |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| image          | functional-380867 image ls           | functional-380867           | jenkins | v1.32.0 | 16 Jan 24 02:46 UTC | 16 Jan 24 02:46 UTC |
	| delete         | -p functional-380867                 | functional-380867           | jenkins | v1.32.0 | 16 Jan 24 02:46 UTC | 16 Jan 24 02:46 UTC |
	| start          | -p ingress-addon-legacy-570599       | ingress-addon-legacy-570599 | jenkins | v1.32.0 | 16 Jan 24 02:46 UTC | 16 Jan 24 02:48 UTC |
	|                | --kubernetes-version=v1.18.20        |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true            |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                 |                             |         |         |                     |                     |
	|                | --container-runtime=crio             |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-570599          | ingress-addon-legacy-570599 | jenkins | v1.32.0 | 16 Jan 24 02:48 UTC | 16 Jan 24 02:48 UTC |
	|                | addons enable ingress                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-570599          | ingress-addon-legacy-570599 | jenkins | v1.32.0 | 16 Jan 24 02:48 UTC | 16 Jan 24 02:48 UTC |
	|                | addons enable ingress-dns            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5               |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-570599          | ingress-addon-legacy-570599 | jenkins | v1.32.0 | 16 Jan 24 02:48 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/        |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'         |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-570599 ip       | ingress-addon-legacy-570599 | jenkins | v1.32.0 | 16 Jan 24 02:51 UTC | 16 Jan 24 02:51 UTC |
	| addons         | ingress-addon-legacy-570599          | ingress-addon-legacy-570599 | jenkins | v1.32.0 | 16 Jan 24 02:51 UTC | 16 Jan 24 02:51 UTC |
	|                | addons disable ingress-dns           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-570599          | ingress-addon-legacy-570599 | jenkins | v1.32.0 | 16 Jan 24 02:51 UTC | 16 Jan 24 02:51 UTC |
	|                | addons disable ingress               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1               |                             |         |         |                     |                     |
	|----------------|--------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 02:46:46
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 02:46:46.946524  491055 out.go:296] Setting OutFile to fd 1 ...
	I0116 02:46:46.946785  491055 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:46:46.946794  491055 out.go:309] Setting ErrFile to fd 2...
	I0116 02:46:46.946799  491055 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:46:46.947028  491055 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-443749/.minikube/bin
	I0116 02:46:46.947650  491055 out.go:303] Setting JSON to false
	I0116 02:46:46.948656  491055 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":8953,"bootTime":1705364254,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 02:46:46.948722  491055 start.go:138] virtualization: kvm guest
	I0116 02:46:46.950980  491055 out.go:177] * [ingress-addon-legacy-570599] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 02:46:46.952416  491055 notify.go:220] Checking for updates...
	I0116 02:46:46.952423  491055 out.go:177]   - MINIKUBE_LOCATION=17965
	I0116 02:46:46.953849  491055 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 02:46:46.955155  491055 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17965-443749/kubeconfig
	I0116 02:46:46.956520  491055 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-443749/.minikube
	I0116 02:46:46.957794  491055 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0116 02:46:46.958966  491055 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 02:46:46.960433  491055 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 02:46:46.981589  491055 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0116 02:46:46.981703  491055 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 02:46:47.033594  491055 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:37 SystemTime:2024-01-16 02:46:47.024002551 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0116 02:46:47.033705  491055 docker.go:295] overlay module found
	I0116 02:46:47.035797  491055 out.go:177] * Using the docker driver based on user configuration
	I0116 02:46:47.037321  491055 start.go:298] selected driver: docker
	I0116 02:46:47.037336  491055 start.go:902] validating driver "docker" against <nil>
	I0116 02:46:47.037348  491055 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 02:46:47.038114  491055 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 02:46:47.088832  491055 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:37 SystemTime:2024-01-16 02:46:47.080743091 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0116 02:46:47.089085  491055 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0116 02:46:47.089316  491055 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0116 02:46:47.091251  491055 out.go:177] * Using Docker driver with root privileges
	I0116 02:46:47.092727  491055 cni.go:84] Creating CNI manager for ""
	I0116 02:46:47.092749  491055 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0116 02:46:47.092762  491055 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0116 02:46:47.092782  491055 start_flags.go:321] config:
	{Name:ingress-addon-legacy-570599 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-570599 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 02:46:47.094339  491055 out.go:177] * Starting control plane node ingress-addon-legacy-570599 in cluster ingress-addon-legacy-570599
	I0116 02:46:47.095665  491055 cache.go:121] Beginning downloading kic base image for docker with crio
	I0116 02:46:47.097084  491055 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0116 02:46:47.098492  491055 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0116 02:46:47.098522  491055 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0116 02:46:47.114293  491055 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0116 02:46:47.114320  491055 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
	I0116 02:46:47.513967  491055 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0116 02:46:47.514006  491055 cache.go:56] Caching tarball of preloaded images
	I0116 02:46:47.514194  491055 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0116 02:46:47.516177  491055 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0116 02:46:47.517686  491055 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0116 02:46:47.628275  491055 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/17965-443749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0116 02:47:01.242785  491055 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0116 02:47:01.242886  491055 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17965-443749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0116 02:47:02.255905  491055 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I0116 02:47:02.256293  491055 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599/config.json ...
	I0116 02:47:02.256329  491055 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599/config.json: {Name:mk26978949b3be2ddd97bb45e22c19fe7c477adc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:47:02.256509  491055 cache.go:194] Successfully downloaded all kic artifacts
	I0116 02:47:02.256536  491055 start.go:365] acquiring machines lock for ingress-addon-legacy-570599: {Name:mkefb68eb8322cf9e68b9bfff0e2c2a9dfea7ea6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 02:47:02.256578  491055 start.go:369] acquired machines lock for "ingress-addon-legacy-570599" in 30.984µs
	I0116 02:47:02.256597  491055 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-570599 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-570599 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 02:47:02.256679  491055 start.go:125] createHost starting for "" (driver="docker")
	I0116 02:47:02.259307  491055 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0116 02:47:02.259532  491055 start.go:159] libmachine.API.Create for "ingress-addon-legacy-570599" (driver="docker")
	I0116 02:47:02.259556  491055 client.go:168] LocalClient.Create starting
	I0116 02:47:02.259610  491055 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17965-443749/.minikube/certs/ca.pem
	I0116 02:47:02.259640  491055 main.go:141] libmachine: Decoding PEM data...
	I0116 02:47:02.259657  491055 main.go:141] libmachine: Parsing certificate...
	I0116 02:47:02.259711  491055 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17965-443749/.minikube/certs/cert.pem
	I0116 02:47:02.259731  491055 main.go:141] libmachine: Decoding PEM data...
	I0116 02:47:02.259739  491055 main.go:141] libmachine: Parsing certificate...
	I0116 02:47:02.260069  491055 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-570599 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0116 02:47:02.275690  491055 cli_runner.go:211] docker network inspect ingress-addon-legacy-570599 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0116 02:47:02.275775  491055 network_create.go:281] running [docker network inspect ingress-addon-legacy-570599] to gather additional debugging logs...
	I0116 02:47:02.275797  491055 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-570599
	W0116 02:47:02.290840  491055 cli_runner.go:211] docker network inspect ingress-addon-legacy-570599 returned with exit code 1
	I0116 02:47:02.290870  491055 network_create.go:284] error running [docker network inspect ingress-addon-legacy-570599]: docker network inspect ingress-addon-legacy-570599: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-570599 not found
	I0116 02:47:02.290886  491055 network_create.go:286] output of [docker network inspect ingress-addon-legacy-570599]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-570599 not found
	
	** /stderr **
	I0116 02:47:02.291014  491055 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0116 02:47:02.306535  491055 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0005cb8a0}
	I0116 02:47:02.306583  491055 network_create.go:124] attempt to create docker network ingress-addon-legacy-570599 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0116 02:47:02.306627  491055 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-570599 ingress-addon-legacy-570599
	I0116 02:47:02.359489  491055 network_create.go:108] docker network ingress-addon-legacy-570599 192.168.49.0/24 created
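The network_create lines record the exact docker invocation after the free-subnet scan settles on 192.168.49.0/24. Replayed via os/exec it looks like the sketch below; the flag values are copied from the log, where in minikube they would be computed rather than hard-coded:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Mirrors the "docker network create" command logged above.
		cmd := exec.Command("docker", "network", "create",
			"--driver=bridge",
			"--subnet=192.168.49.0/24",
			"--gateway=192.168.49.1",
			"-o", "--ip-masq", "-o", "--icc",
			"-o", "com.docker.network.driver.mtu=1500",
			"--label=created_by.minikube.sigs.k8s.io=true",
			"--label=name.minikube.sigs.k8s.io=ingress-addon-legacy-570599",
			"ingress-addon-legacy-570599",
		)
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			os.Exit(1)
		}
	}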
	I0116 02:47:02.359524  491055 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-570599" container
	I0116 02:47:02.359580  491055 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0116 02:47:02.374614  491055 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-570599 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-570599 --label created_by.minikube.sigs.k8s.io=true
	I0116 02:47:02.392182  491055 oci.go:103] Successfully created a docker volume ingress-addon-legacy-570599
	I0116 02:47:02.392319  491055 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-570599-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-570599 --entrypoint /usr/bin/test -v ingress-addon-legacy-570599:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
	I0116 02:47:04.116281  491055 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-570599-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-570599 --entrypoint /usr/bin/test -v ingress-addon-legacy-570599:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib: (1.72389167s)
	I0116 02:47:04.116334  491055 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-570599
	I0116 02:47:04.116351  491055 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0116 02:47:04.116374  491055 kic.go:194] Starting extracting preloaded images to volume ...
	I0116 02:47:04.116443  491055 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17965-443749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-570599:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0116 02:47:09.414667  491055 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17965-443749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-570599:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir: (5.298180194s)
	I0116 02:47:09.414712  491055 kic.go:203] duration metric: took 5.298327 seconds to extract preloaded images to volume
	W0116 02:47:09.414841  491055 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0116 02:47:09.414929  491055 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0116 02:47:09.466266  491055 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-570599 --name ingress-addon-legacy-570599 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-570599 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-570599 --network ingress-addon-legacy-570599 --ip 192.168.49.2 --volume ingress-addon-legacy-570599:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0
	I0116 02:47:09.726478  491055 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-570599 --format={{.State.Running}}
	I0116 02:47:09.743826  491055 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-570599 --format={{.State.Status}}
	I0116 02:47:09.762064  491055 cli_runner.go:164] Run: docker exec ingress-addon-legacy-570599 stat /var/lib/dpkg/alternatives/iptables
	I0116 02:47:09.801471  491055 oci.go:144] the created container "ingress-addon-legacy-570599" has a running status.
	I0116 02:47:09.801505  491055 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17965-443749/.minikube/machines/ingress-addon-legacy-570599/id_rsa...
	I0116 02:47:09.953078  491055 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-443749/.minikube/machines/ingress-addon-legacy-570599/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0116 02:47:09.953126  491055 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17965-443749/.minikube/machines/ingress-addon-legacy-570599/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0116 02:47:09.972158  491055 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-570599 --format={{.State.Status}}
	I0116 02:47:09.988740  491055 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0116 02:47:09.988761  491055 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-570599 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0116 02:47:10.030344  491055 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-570599 --format={{.State.Status}}
	I0116 02:47:10.057569  491055 machine.go:88] provisioning docker machine ...
	I0116 02:47:10.057609  491055 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-570599"
	I0116 02:47:10.057661  491055 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-570599
	I0116 02:47:10.074103  491055 main.go:141] libmachine: Using SSH client type: native
	I0116 02:47:10.074462  491055 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 33222 <nil> <nil>}
	I0116 02:47:10.074479  491055 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-570599 && echo "ingress-addon-legacy-570599" | sudo tee /etc/hostname
	I0116 02:47:10.302338  491055 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-570599
	
	I0116 02:47:10.302420  491055 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-570599
	I0116 02:47:10.319960  491055 main.go:141] libmachine: Using SSH client type: native
	I0116 02:47:10.320319  491055 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 33222 <nil> <nil>}
	I0116 02:47:10.320341  491055 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-570599' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-570599/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-570599' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 02:47:10.456276  491055 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 02:47:10.456320  491055 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17965-443749/.minikube CaCertPath:/home/jenkins/minikube-integration/17965-443749/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17965-443749/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17965-443749/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17965-443749/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17965-443749/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17965-443749/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17965-443749/.minikube}
	I0116 02:47:10.456351  491055 ubuntu.go:177] setting up certificates
	I0116 02:47:10.456369  491055 provision.go:83] configureAuth start
	I0116 02:47:10.456448  491055 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-570599
	I0116 02:47:10.472761  491055 provision.go:138] copyHostCerts
	I0116 02:47:10.472800  491055 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-443749/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17965-443749/.minikube/ca.pem
	I0116 02:47:10.472840  491055 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-443749/.minikube/ca.pem, removing ...
	I0116 02:47:10.472849  491055 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-443749/.minikube/ca.pem
	I0116 02:47:10.472918  491055 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-443749/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17965-443749/.minikube/ca.pem (1078 bytes)
	I0116 02:47:10.472988  491055 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-443749/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17965-443749/.minikube/cert.pem
	I0116 02:47:10.473005  491055 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-443749/.minikube/cert.pem, removing ...
	I0116 02:47:10.473010  491055 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-443749/.minikube/cert.pem
	I0116 02:47:10.473032  491055 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-443749/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17965-443749/.minikube/cert.pem (1123 bytes)
	I0116 02:47:10.473077  491055 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-443749/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17965-443749/.minikube/key.pem
	I0116 02:47:10.473093  491055 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-443749/.minikube/key.pem, removing ...
	I0116 02:47:10.473099  491055 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-443749/.minikube/key.pem
	I0116 02:47:10.473118  491055 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-443749/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17965-443749/.minikube/key.pem (1675 bytes)
	I0116 02:47:10.473162  491055 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17965-443749/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17965-443749/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17965-443749/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-570599 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-570599]
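The "generating server cert" line carries a SAN list of two IPs and three host names. A condensed sketch of that step with crypto/x509: build a template whose IPAddresses/DNSNames hold the SANs from the log and sign it with the CA key. The CA here is generated inline as a stand-in for ca.pem/ca-key.pem, and error handling on key generation is trimmed:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Illustrative CA standing in for the ca.pem/ca-key.pem read earlier in the log.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		ca := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(26280 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}

		// Server certificate carrying the SAN list from the log line above.
		key, _ := rsa.GenerateKey(rand.Reader, 2048)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.ingress-addon-legacy-570599"}},
			IPAddresses:  []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
			DNSNames:     []string{"localhost", "minikube", "ingress-addon-legacy-570599"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		fmt.Printf("server cert: %d DER bytes\n", len(der))
	}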
	I0116 02:47:10.534829  491055 provision.go:172] copyRemoteCerts
	I0116 02:47:10.534889  491055 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 02:47:10.534925  491055 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-570599
	I0116 02:47:10.550731  491055 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33222 SSHKeyPath:/home/jenkins/minikube-integration/17965-443749/.minikube/machines/ingress-addon-legacy-570599/id_rsa Username:docker}
	I0116 02:47:10.644755  491055 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-443749/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0116 02:47:10.644810  491055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-443749/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0116 02:47:10.665940  491055 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-443749/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0116 02:47:10.665994  491055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-443749/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0116 02:47:10.686325  491055 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-443749/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0116 02:47:10.686387  491055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-443749/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0116 02:47:10.706876  491055 provision.go:86] duration metric: configureAuth took 250.492186ms
	I0116 02:47:10.706904  491055 ubuntu.go:193] setting minikube options for container-runtime
	I0116 02:47:10.707072  491055 config.go:182] Loaded profile config "ingress-addon-legacy-570599": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0116 02:47:10.707174  491055 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-570599
	I0116 02:47:10.722991  491055 main.go:141] libmachine: Using SSH client type: native
	I0116 02:47:10.723463  491055 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 33222 <nil> <nil>}
	I0116 02:47:10.723487  491055 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 02:47:10.964180  491055 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 02:47:10.964216  491055 machine.go:91] provisioned docker machine in 906.624374ms
	I0116 02:47:10.964230  491055 client.go:171] LocalClient.Create took 8.704667959s
	I0116 02:47:10.964273  491055 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-570599" took 8.704726273s
	I0116 02:47:10.964290  491055 start.go:300] post-start starting for "ingress-addon-legacy-570599" (driver="docker")
	I0116 02:47:10.964310  491055 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 02:47:10.964390  491055 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 02:47:10.964442  491055 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-570599
	I0116 02:47:10.979798  491055 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33222 SSHKeyPath:/home/jenkins/minikube-integration/17965-443749/.minikube/machines/ingress-addon-legacy-570599/id_rsa Username:docker}
	I0116 02:47:11.073185  491055 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 02:47:11.076135  491055 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0116 02:47:11.076181  491055 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0116 02:47:11.076190  491055 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0116 02:47:11.076198  491055 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0116 02:47:11.076212  491055 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-443749/.minikube/addons for local assets ...
	I0116 02:47:11.076290  491055 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-443749/.minikube/files for local assets ...
	I0116 02:47:11.076372  491055 filesync.go:149] local asset: /home/jenkins/minikube-integration/17965-443749/.minikube/files/etc/ssl/certs/4505732.pem -> 4505732.pem in /etc/ssl/certs
	I0116 02:47:11.076383  491055 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-443749/.minikube/files/etc/ssl/certs/4505732.pem -> /etc/ssl/certs/4505732.pem
	I0116 02:47:11.076466  491055 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 02:47:11.084104  491055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-443749/.minikube/files/etc/ssl/certs/4505732.pem --> /etc/ssl/certs/4505732.pem (1708 bytes)
	I0116 02:47:11.105449  491055 start.go:303] post-start completed in 141.136139ms
	I0116 02:47:11.105783  491055 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-570599
	I0116 02:47:11.121472  491055 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599/config.json ...
	I0116 02:47:11.121743  491055 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0116 02:47:11.121803  491055 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-570599
	I0116 02:47:11.137382  491055 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33222 SSHKeyPath:/home/jenkins/minikube-integration/17965-443749/.minikube/machines/ingress-addon-legacy-570599/id_rsa Username:docker}
	I0116 02:47:11.229035  491055 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0116 02:47:11.233183  491055 start.go:128] duration metric: createHost completed in 8.976487138s
	I0116 02:47:11.233210  491055 start.go:83] releasing machines lock for "ingress-addon-legacy-570599", held for 8.976620309s
	I0116 02:47:11.233287  491055 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-570599
	I0116 02:47:11.249367  491055 ssh_runner.go:195] Run: cat /version.json
	I0116 02:47:11.249400  491055 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 02:47:11.249423  491055 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-570599
	I0116 02:47:11.249460  491055 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-570599
	I0116 02:47:11.265601  491055 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33222 SSHKeyPath:/home/jenkins/minikube-integration/17965-443749/.minikube/machines/ingress-addon-legacy-570599/id_rsa Username:docker}
	I0116 02:47:11.266593  491055 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33222 SSHKeyPath:/home/jenkins/minikube-integration/17965-443749/.minikube/machines/ingress-addon-legacy-570599/id_rsa Username:docker}
	I0116 02:47:11.444828  491055 ssh_runner.go:195] Run: systemctl --version
	I0116 02:47:11.448899  491055 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 02:47:11.584725  491055 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0116 02:47:11.588917  491055 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 02:47:11.606111  491055 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0116 02:47:11.606196  491055 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 02:47:11.633146  491055 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
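The two "find ... -exec mv" runs above park conflicting CNI configs (loopback, bridge, podman) by renaming them with a ".mk_disabled" suffix, so cri-o won't load them before kindnet is installed. An equivalent sketch in Go, with the directory and patterns taken from the log:

	package main

	import (
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		const dir = "/etc/cni/net.d"
		entries, _ := os.ReadDir(dir)
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			for _, pat := range []string{"*loopback.conf*", "*bridge*", "*podman*"} {
				if ok, _ := filepath.Match(pat, name); ok {
					src := filepath.Join(dir, name)
					_ = os.Rename(src, src+".mk_disabled") // mv {} {}.mk_disabled
					break
				}
			}
		}
	}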
	I0116 02:47:11.633170  491055 start.go:475] detecting cgroup driver to use...
	I0116 02:47:11.633208  491055 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0116 02:47:11.633255  491055 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 02:47:11.646750  491055 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 02:47:11.656956  491055 docker.go:217] disabling cri-docker service (if available) ...
	I0116 02:47:11.657014  491055 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 02:47:11.669251  491055 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 02:47:11.682035  491055 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 02:47:11.766123  491055 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 02:47:11.833202  491055 docker.go:233] disabling docker service ...
	I0116 02:47:11.833261  491055 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 02:47:11.850621  491055 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 02:47:11.860999  491055 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 02:47:11.937679  491055 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 02:47:12.027069  491055 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 02:47:12.037348  491055 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 02:47:12.051737  491055 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0116 02:47:12.051804  491055 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 02:47:12.060693  491055 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 02:47:12.060766  491055 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 02:47:12.069648  491055 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 02:47:12.078553  491055 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 02:47:12.087220  491055 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 02:47:12.095214  491055 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 02:47:12.102298  491055 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 02:47:12.109599  491055 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 02:47:12.180711  491055 ssh_runner.go:195] Run: sudo systemctl restart crio
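The four "sudo sed -i" runs at 02:47:12.05-12.08 rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin pause_image, force cgroup_manager to "cgroupfs", delete any stale conmon_cgroup line, and re-add conmon_cgroup = "pod" after the manager setting; the daemon-reload and crio restart then pick the file up. A Go sketch of the same rewrite, offered only as an illustration of what those seds do:

	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		const conf = "/etc/crio/crio.conf.d/02-crio.conf" // path from the sed commands above
		data, err := os.ReadFile(conf)
		if err != nil {
			panic(err)
		}
		s := string(data)
		// sed #1: pin the pause image
		s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.2"`)
		// sed #3: drop any existing conmon_cgroup line
		s = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(s, "")
		// seds #2 and #4: force cgroupfs and re-add conmon_cgroup right after it
		s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(s, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
		if err := os.WriteFile(conf, []byte(s), 0o644); err != nil {
			panic(err)
		}
	}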
	I0116 02:47:12.291446  491055 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 02:47:12.291504  491055 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 02:47:12.294769  491055 start.go:543] Will wait 60s for crictl version
	I0116 02:47:12.294813  491055 ssh_runner.go:195] Run: which crictl
	I0116 02:47:12.297658  491055 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 02:47:12.329354  491055 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0116 02:47:12.329449  491055 ssh_runner.go:195] Run: crio --version
	I0116 02:47:12.362887  491055 ssh_runner.go:195] Run: crio --version
	I0116 02:47:12.399191  491055 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I0116 02:47:12.400712  491055 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-570599 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0116 02:47:12.416635  491055 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0116 02:47:12.420236  491055 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
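The bash one-liner above injects "192.168.49.1 host.minikube.internal" into /etc/hosts idempotently: filter out any old entry with grep -v, append the fresh one, stage via a temp file, and copy into place. The same idea in Go (writing directly instead of staging through /tmp, so it needs root):

	package main

	import (
		"os"
		"strings"
	)

	func main() {
		const entry = "192.168.49.1\thost.minikube.internal"
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// grep -v $'\thost.minikube.internal$'
			if !strings.HasSuffix(line, "\thost.minikube.internal") {
				kept = append(kept, line)
			}
		}
		kept = append(kept, entry)
		if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
			panic(err)
		}
	}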
	I0116 02:47:12.430281  491055 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0116 02:47:12.430342  491055 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 02:47:12.472561  491055 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0116 02:47:12.472636  491055 ssh_runner.go:195] Run: which lz4
	I0116 02:47:12.475884  491055 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-443749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0116 02:47:12.475975  491055 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0116 02:47:12.478938  491055 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 02:47:12.478965  491055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-443749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I0116 02:47:13.405019  491055 crio.go:444] Took 0.929077 seconds to copy over tarball
	I0116 02:47:13.405087  491055 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0116 02:47:15.652790  491055 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.247658031s)
	I0116 02:47:15.652819  491055 crio.go:451] Took 2.247770 seconds to extract the tarball
	I0116 02:47:15.652829  491055 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0116 02:47:15.724153  491055 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 02:47:15.755481  491055 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0116 02:47:15.755505  491055 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0116 02:47:15.755563  491055 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 02:47:15.755574  491055 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0116 02:47:15.755596  491055 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0116 02:47:15.755613  491055 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0116 02:47:15.755660  491055 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0116 02:47:15.755682  491055 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0116 02:47:15.755597  491055 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0116 02:47:15.755575  491055 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0116 02:47:15.756637  491055 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0116 02:47:15.756649  491055 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0116 02:47:15.756682  491055 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 02:47:15.756639  491055 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0116 02:47:15.756773  491055 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0116 02:47:15.756639  491055 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0116 02:47:15.756644  491055 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0116 02:47:15.756923  491055 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0116 02:47:15.878681  491055 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0116 02:47:15.881653  491055 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0116 02:47:15.882980  491055 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0116 02:47:15.886098  491055 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0116 02:47:15.904315  491055 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0116 02:47:15.910570  491055 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0116 02:47:15.919795  491055 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0116 02:47:15.932681  491055 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0116 02:47:15.932729  491055 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0116 02:47:15.932761  491055 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0116 02:47:15.932774  491055 ssh_runner.go:195] Run: which crictl
	I0116 02:47:15.932783  491055 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0116 02:47:15.932806  491055 ssh_runner.go:195] Run: which crictl
	I0116 02:47:16.002744  491055 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0116 02:47:16.002798  491055 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0116 02:47:16.002796  491055 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0116 02:47:16.002849  491055 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0116 02:47:16.002889  491055 ssh_runner.go:195] Run: which crictl
	I0116 02:47:16.002897  491055 ssh_runner.go:195] Run: which crictl
	I0116 02:47:16.024592  491055 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0116 02:47:16.024637  491055 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0116 02:47:16.024643  491055 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0116 02:47:16.024669  491055 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0116 02:47:16.024690  491055 ssh_runner.go:195] Run: which crictl
	I0116 02:47:16.024710  491055 ssh_runner.go:195] Run: which crictl
	I0116 02:47:16.031677  491055 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0116 02:47:16.031721  491055 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0116 02:47:16.031743  491055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0116 02:47:16.031752  491055 ssh_runner.go:195] Run: which crictl
	I0116 02:47:16.031786  491055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0116 02:47:16.031840  491055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0116 02:47:16.031856  491055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0116 02:47:16.031888  491055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0116 02:47:16.031889  491055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0116 02:47:16.035951  491055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0116 02:47:16.214338  491055 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-443749/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I0116 02:47:16.214360  491055 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-443749/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0116 02:47:16.214379  491055 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-443749/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0116 02:47:16.218040  491055 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-443749/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0116 02:47:16.218116  491055 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-443749/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I0116 02:47:16.218140  491055 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-443749/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0116 02:47:16.218197  491055 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-443749/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I0116 02:47:17.072049  491055 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 02:47:17.207765  491055 cache_images.go:92] LoadImages completed in 1.452241519s
	W0116 02:47:17.207879  491055 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17965-443749/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20: no such file or directory
	I0116 02:47:17.207978  491055 ssh_runner.go:195] Run: crio config
	I0116 02:47:17.250290  491055 cni.go:84] Creating CNI manager for ""
	I0116 02:47:17.250313  491055 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0116 02:47:17.250334  491055 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 02:47:17.250365  491055 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-570599 NodeName:ingress-addon-legacy-570599 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0116 02:47:17.250512  491055 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-570599"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 02:47:17.250600  491055 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-570599 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-570599 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 02:47:17.250671  491055 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0116 02:47:17.258682  491055 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 02:47:17.258750  491055 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 02:47:17.266578  491055 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I0116 02:47:17.282058  491055 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0116 02:47:17.297285  491055 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0116 02:47:17.312607  491055 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0116 02:47:17.315656  491055 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 02:47:17.325143  491055 certs.go:56] Setting up /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599 for IP: 192.168.49.2
	I0116 02:47:17.325170  491055 certs.go:190] acquiring lock for shared ca certs: {Name:mk8883b8c07de4938a73ea389443b00589415803 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:47:17.325327  491055 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17965-443749/.minikube/ca.key
	I0116 02:47:17.325380  491055 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17965-443749/.minikube/proxy-client-ca.key
	I0116 02:47:17.325427  491055 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599/client.key
	I0116 02:47:17.325442  491055 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599/client.crt with IP's: []
	I0116 02:47:17.400103  491055 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599/client.crt ...
	I0116 02:47:17.400139  491055 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599/client.crt: {Name:mkd3d51da4be3e97d667fc8b127f16e2a33e3ebc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:47:17.400354  491055 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599/client.key ...
	I0116 02:47:17.400372  491055 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599/client.key: {Name:mke378ef889d7c5035eea7d720aa45c826eefa60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:47:17.400480  491055 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599/apiserver.key.dd3b5fb2
	I0116 02:47:17.400507  491055 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0116 02:47:17.455745  491055 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599/apiserver.crt.dd3b5fb2 ...
	I0116 02:47:17.455778  491055 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599/apiserver.crt.dd3b5fb2: {Name:mka03148a57c884b8c11f046d21f92a9be14ebdd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:47:17.455949  491055 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599/apiserver.key.dd3b5fb2 ...
	I0116 02:47:17.455982  491055 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599/apiserver.key.dd3b5fb2: {Name:mk0d07a125b8ff884bc04d7fb1ad19545adbe5c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:47:17.456090  491055 certs.go:337] copying /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599/apiserver.crt
	I0116 02:47:17.456185  491055 certs.go:341] copying /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599/apiserver.key
	I0116 02:47:17.456277  491055 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599/proxy-client.key
	I0116 02:47:17.456304  491055 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599/proxy-client.crt with IP's: []
	I0116 02:47:17.755650  491055 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599/proxy-client.crt ...
	I0116 02:47:17.755688  491055 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599/proxy-client.crt: {Name:mk964db91f9be3b82f174839d1e57e6a88717ebc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:47:17.755882  491055 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599/proxy-client.key ...
	I0116 02:47:17.755909  491055 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599/proxy-client.key: {Name:mkb93e807c87ff96042b5715b9a6dd6746191470 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:47:17.756025  491055 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0116 02:47:17.756048  491055 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0116 02:47:17.756058  491055 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0116 02:47:17.756069  491055 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0116 02:47:17.756084  491055 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-443749/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0116 02:47:17.756105  491055 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-443749/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0116 02:47:17.756124  491055 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-443749/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0116 02:47:17.756141  491055 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-443749/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0116 02:47:17.756223  491055 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-443749/.minikube/certs/home/jenkins/minikube-integration/17965-443749/.minikube/certs/450573.pem (1338 bytes)
	W0116 02:47:17.756293  491055 certs.go:433] ignoring /home/jenkins/minikube-integration/17965-443749/.minikube/certs/home/jenkins/minikube-integration/17965-443749/.minikube/certs/450573_empty.pem, impossibly tiny 0 bytes
	I0116 02:47:17.756309  491055 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-443749/.minikube/certs/home/jenkins/minikube-integration/17965-443749/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 02:47:17.756341  491055 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-443749/.minikube/certs/home/jenkins/minikube-integration/17965-443749/.minikube/certs/ca.pem (1078 bytes)
	I0116 02:47:17.756377  491055 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-443749/.minikube/certs/home/jenkins/minikube-integration/17965-443749/.minikube/certs/cert.pem (1123 bytes)
	I0116 02:47:17.756409  491055 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-443749/.minikube/certs/home/jenkins/minikube-integration/17965-443749/.minikube/certs/key.pem (1675 bytes)
	I0116 02:47:17.756464  491055 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-443749/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17965-443749/.minikube/files/etc/ssl/certs/4505732.pem (1708 bytes)
	I0116 02:47:17.756520  491055 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-443749/.minikube/files/etc/ssl/certs/4505732.pem -> /usr/share/ca-certificates/4505732.pem
	I0116 02:47:17.756540  491055 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-443749/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:47:17.756555  491055 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-443749/.minikube/certs/450573.pem -> /usr/share/ca-certificates/450573.pem
	I0116 02:47:17.757199  491055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 02:47:17.778790  491055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0116 02:47:17.799699  491055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 02:47:17.820027  491055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0116 02:47:17.840295  491055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-443749/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 02:47:17.861013  491055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-443749/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0116 02:47:17.881670  491055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-443749/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 02:47:17.902159  491055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-443749/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 02:47:17.922674  491055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-443749/.minikube/files/etc/ssl/certs/4505732.pem --> /usr/share/ca-certificates/4505732.pem (1708 bytes)
	I0116 02:47:17.943104  491055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-443749/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 02:47:17.963733  491055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-443749/.minikube/certs/450573.pem --> /usr/share/ca-certificates/450573.pem (1338 bytes)
	I0116 02:47:17.984587  491055 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 02:47:18.000234  491055 ssh_runner.go:195] Run: openssl version
	I0116 02:47:18.005264  491055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 02:47:18.013737  491055 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:47:18.017004  491055 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 02:37 /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:47:18.017076  491055 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:47:18.023175  491055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 02:47:18.031278  491055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/450573.pem && ln -fs /usr/share/ca-certificates/450573.pem /etc/ssl/certs/450573.pem"
	I0116 02:47:18.039361  491055 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/450573.pem
	I0116 02:47:18.042287  491055 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 02:43 /usr/share/ca-certificates/450573.pem
	I0116 02:47:18.042326  491055 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/450573.pem
	I0116 02:47:18.048349  491055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/450573.pem /etc/ssl/certs/51391683.0"
	I0116 02:47:18.056395  491055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4505732.pem && ln -fs /usr/share/ca-certificates/4505732.pem /etc/ssl/certs/4505732.pem"
	I0116 02:47:18.064164  491055 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4505732.pem
	I0116 02:47:18.067223  491055 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 02:43 /usr/share/ca-certificates/4505732.pem
	I0116 02:47:18.067263  491055 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4505732.pem
	I0116 02:47:18.073320  491055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4505732.pem /etc/ssl/certs/3ec20f2e.0"
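The ln -fs / openssl x509 -hash sequence above follows OpenSSL's subject-hash convention: a certificate under /etc/ssl/certs is only picked up by OpenSSL-based clients if it is also reachable as <subject-hash>.0. A minimal shell sketch of the same steps, with a hypothetical cert.pem standing in for the real files:

    # Link the shared certificate into /etc/ssl/certs (test -s guards against empty files),
    # mirroring the "test -s ... && ln -fs ..." commands in the log above.
    sudo test -s /usr/share/ca-certificates/cert.pem && \
        sudo ln -fs /usr/share/ca-certificates/cert.pem /etc/ssl/certs/cert.pem
    # Compute the OpenSSL subject hash (e.g. b5213941) and expose the cert as <hash>.0.
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/cert.pem)
    sudo ln -fs /etc/ssl/certs/cert.pem "/etc/ssl/certs/${HASH}.0"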
	I0116 02:47:18.081305  491055 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 02:47:18.084189  491055 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0116 02:47:18.084242  491055 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-570599 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-570599 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 02:47:18.084350  491055 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 02:47:18.084388  491055 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 02:47:18.116464  491055 cri.go:89] found id: ""
	I0116 02:47:18.116540  491055 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 02:47:18.124418  491055 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 02:47:18.132076  491055 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0116 02:47:18.132122  491055 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 02:47:18.139481  491055 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 02:47:18.139537  491055 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0116 02:47:18.180750  491055 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0116 02:47:18.180821  491055 kubeadm.go:322] [preflight] Running pre-flight checks
	I0116 02:47:18.217693  491055 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0116 02:47:18.217770  491055 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1048-gcp
	I0116 02:47:18.217810  491055 kubeadm.go:322] OS: Linux
	I0116 02:47:18.217878  491055 kubeadm.go:322] CGROUPS_CPU: enabled
	I0116 02:47:18.217946  491055 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0116 02:47:18.218015  491055 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0116 02:47:18.218085  491055 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0116 02:47:18.218132  491055 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0116 02:47:18.218199  491055 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0116 02:47:18.285621  491055 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 02:47:18.285764  491055 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 02:47:18.285876  491055 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0116 02:47:18.467746  491055 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 02:47:18.468632  491055 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 02:47:18.468728  491055 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0116 02:47:18.543706  491055 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 02:47:18.547390  491055 out.go:204]   - Generating certificates and keys ...
	I0116 02:47:18.547505  491055 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0116 02:47:18.547607  491055 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0116 02:47:18.779419  491055 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0116 02:47:19.003586  491055 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0116 02:47:19.242049  491055 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0116 02:47:19.364065  491055 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0116 02:47:19.624618  491055 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0116 02:47:19.624743  491055 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-570599 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0116 02:47:19.793496  491055 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0116 02:47:19.793634  491055 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-570599 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0116 02:47:20.056168  491055 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0116 02:47:20.223274  491055 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0116 02:47:20.412954  491055 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0116 02:47:20.413064  491055 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 02:47:20.851525  491055 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 02:47:21.083552  491055 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 02:47:21.153165  491055 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 02:47:21.310483  491055 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 02:47:21.311176  491055 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 02:47:21.314265  491055 out.go:204]   - Booting up control plane ...
	I0116 02:47:21.314350  491055 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 02:47:21.317871  491055 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 02:47:21.319413  491055 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 02:47:21.320320  491055 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 02:47:21.322863  491055 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0116 02:47:28.325544  491055 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.002229 seconds
	I0116 02:47:28.325741  491055 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0116 02:47:28.336667  491055 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0116 02:47:28.850922  491055 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0116 02:47:28.851174  491055 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-570599 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0116 02:47:29.357753  491055 kubeadm.go:322] [bootstrap-token] Using token: oowmyh.meiq9ouypotmy6bs
	I0116 02:47:29.359241  491055 out.go:204]   - Configuring RBAC rules ...
	I0116 02:47:29.359370  491055 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0116 02:47:29.362926  491055 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0116 02:47:29.368629  491055 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0116 02:47:29.370245  491055 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0116 02:47:29.372061  491055 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0116 02:47:29.374930  491055 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0116 02:47:29.381149  491055 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0116 02:47:29.592851  491055 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0116 02:47:29.773861  491055 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0116 02:47:29.774812  491055 kubeadm.go:322] 
	I0116 02:47:29.774901  491055 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0116 02:47:29.774912  491055 kubeadm.go:322] 
	I0116 02:47:29.774997  491055 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0116 02:47:29.775006  491055 kubeadm.go:322] 
	I0116 02:47:29.775064  491055 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0116 02:47:29.775156  491055 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0116 02:47:29.775235  491055 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0116 02:47:29.775250  491055 kubeadm.go:322] 
	I0116 02:47:29.775325  491055 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0116 02:47:29.775435  491055 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0116 02:47:29.775552  491055 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0116 02:47:29.775560  491055 kubeadm.go:322] 
	I0116 02:47:29.775656  491055 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0116 02:47:29.775791  491055 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0116 02:47:29.775800  491055 kubeadm.go:322] 
	I0116 02:47:29.775872  491055 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token oowmyh.meiq9ouypotmy6bs \
	I0116 02:47:29.775962  491055 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:8cf2f52e6e786139868a71d0da6c4e60f90166b48a1f8c1755e09d650797d85a \
	I0116 02:47:29.775984  491055 kubeadm.go:322]     --control-plane 
	I0116 02:47:29.775990  491055 kubeadm.go:322] 
	I0116 02:47:29.776070  491055 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0116 02:47:29.776076  491055 kubeadm.go:322] 
	I0116 02:47:29.776145  491055 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token oowmyh.meiq9ouypotmy6bs \
	I0116 02:47:29.776279  491055 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:8cf2f52e6e786139868a71d0da6c4e60f90166b48a1f8c1755e09d650797d85a 
	I0116 02:47:29.777475  491055 kubeadm.go:322] W0116 02:47:18.180196    1368 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0116 02:47:29.777650  491055 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1048-gcp\n", err: exit status 1
	I0116 02:47:29.777734  491055 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0116 02:47:29.777838  491055 kubeadm.go:322] W0116 02:47:21.317649    1368 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0116 02:47:29.777938  491055 kubeadm.go:322] W0116 02:47:21.319200    1368 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0116 02:47:29.777954  491055 cni.go:84] Creating CNI manager for ""
	I0116 02:47:29.777965  491055 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0116 02:47:29.779799  491055 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0116 02:47:29.781318  491055 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0116 02:47:29.785147  491055 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I0116 02:47:29.785170  491055 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0116 02:47:29.801171  491055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0116 02:47:30.250249  491055 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 02:47:30.250341  491055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:47:30.250352  491055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=6e8fa5f64d0e7272be43ff25ed3826261f0a2578 minikube.k8s.io/name=ingress-addon-legacy-570599 minikube.k8s.io/updated_at=2024_01_16T02_47_30_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:47:30.257222  491055 ops.go:34] apiserver oom_adj: -16
	I0116 02:47:30.403722  491055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:47:30.904524  491055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:47:31.404609  491055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:47:31.904576  491055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:47:32.404807  491055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:47:32.904741  491055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:47:33.404570  491055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:47:33.904468  491055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:47:34.404113  491055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:47:34.903845  491055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:47:35.403853  491055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:47:35.904728  491055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:47:36.404527  491055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:47:36.904672  491055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:47:37.404135  491055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:47:37.904449  491055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:47:38.404527  491055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:47:38.904804  491055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:47:39.404014  491055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:47:39.904595  491055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:47:40.403967  491055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:47:40.904636  491055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:47:41.403872  491055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:47:41.904640  491055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:47:42.404645  491055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:47:42.904863  491055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:47:43.404302  491055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:47:43.903896  491055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:47:44.403997  491055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:47:44.719724  491055 kubeadm.go:1088] duration metric: took 14.469465405s to wait for elevateKubeSystemPrivileges.
	I0116 02:47:44.719757  491055 kubeadm.go:406] StartCluster complete in 26.635519083s
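The run of back-to-back `kubectl get sa default` calls above is a fixed-interval wait: kubeadm init has returned, but the "default" ServiceAccount only exists once the controller-manager's service-account controller has run, so minikube retries roughly every 500ms until it appears (14.47s in this run). A sketch of an equivalent wait loop, assuming the same on-node paths as in the log:

    # Poll until the default ServiceAccount exists, mirroring the ~500ms cadence above.
    until sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done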
	I0116 02:47:44.719775  491055 settings.go:142] acquiring lock: {Name:mk9828dcd1e8ccfccc84768ea3ab177cb7be8afc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:47:44.719826  491055 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17965-443749/kubeconfig
	I0116 02:47:44.720563  491055 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-443749/kubeconfig: {Name:mka24a12b8e1d963a345dadb59b1cdf4f4debade Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:47:44.721013  491055 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 02:47:44.721025  491055 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 02:47:44.721126  491055 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-570599"
	I0116 02:47:44.721194  491055 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-570599"
	I0116 02:47:44.721125  491055 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-570599"
	I0116 02:47:44.721240  491055 config.go:182] Loaded profile config "ingress-addon-legacy-570599": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0116 02:47:44.721249  491055 addons.go:234] Setting addon storage-provisioner=true in "ingress-addon-legacy-570599"
	I0116 02:47:44.721336  491055 host.go:66] Checking if "ingress-addon-legacy-570599" exists ...
	I0116 02:47:44.721704  491055 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-570599 --format={{.State.Status}}
	I0116 02:47:44.721705  491055 kapi.go:59] client config for ingress-addon-legacy-570599: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599/client.crt", KeyFile:"/home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599/client.key", CAFile:"/home/jenkins/minikube-integration/17965-443749/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19be0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 02:47:44.721863  491055 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-570599 --format={{.State.Status}}
	I0116 02:47:44.722877  491055 cert_rotation.go:137] Starting client certificate rotation controller
	I0116 02:47:44.752097  491055 kapi.go:59] client config for ingress-addon-legacy-570599: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599/client.crt", KeyFile:"/home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599/client.key", CAFile:"/home/jenkins/minikube-integration/17965-443749/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19be0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 02:47:44.752422  491055 addons.go:234] Setting addon default-storageclass=true in "ingress-addon-legacy-570599"
	I0116 02:47:44.752463  491055 host.go:66] Checking if "ingress-addon-legacy-570599" exists ...
	I0116 02:47:44.752866  491055 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-570599 --format={{.State.Status}}
	I0116 02:47:44.754914  491055 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 02:47:44.756239  491055 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 02:47:44.756275  491055 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 02:47:44.756333  491055 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-570599
	I0116 02:47:44.769139  491055 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 02:47:44.769170  491055 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 02:47:44.769226  491055 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-570599
	I0116 02:47:44.772577  491055 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33222 SSHKeyPath:/home/jenkins/minikube-integration/17965-443749/.minikube/machines/ingress-addon-legacy-570599/id_rsa Username:docker}
	I0116 02:47:44.784881  491055 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33222 SSHKeyPath:/home/jenkins/minikube-integration/17965-443749/.minikube/machines/ingress-addon-legacy-570599/id_rsa Username:docker}
	I0116 02:47:45.011446  491055 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0116 02:47:45.022618  491055 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 02:47:45.024461  491055 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 02:47:45.303483  491055 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-570599" context rescaled to 1 replicas
	I0116 02:47:45.303549  491055 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 02:47:45.305222  491055 out.go:177] * Verifying Kubernetes components...
	I0116 02:47:45.307026  491055 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 02:47:45.519057  491055 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
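The sed pipeline at 02:47:45.011 edits the coredns ConfigMap in flight before `kubectl replace` writes it back. Reconstructed from the sed expressions themselves (not captured from the cluster), the resulting Corefile gains a `log` directive before `errors` plus a hosts block ahead of the forward plugin, roughly:

            hosts {
               192.168.49.1 host.minikube.internal
               fallthrough
            }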
	I0116 02:47:45.607108  491055 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0116 02:47:45.608545  491055 addons.go:505] enable addons completed in 887.519871ms: enabled=[default-storageclass storage-provisioner]
	I0116 02:47:45.606334  491055 kapi.go:59] client config for ingress-addon-legacy-570599: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599/client.crt", KeyFile:"/home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599/client.key", CAFile:"/home/jenkins/minikube-integration/17965-443749/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19be0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 02:47:45.608902  491055 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-570599" to be "Ready" ...
	I0116 02:47:47.612722  491055 node_ready.go:58] node "ingress-addon-legacy-570599" has status "Ready":"False"
	I0116 02:47:50.255820  491055 node_ready.go:58] node "ingress-addon-legacy-570599" has status "Ready":"False"
	I0116 02:47:52.612151  491055 node_ready.go:58] node "ingress-addon-legacy-570599" has status "Ready":"False"
	I0116 02:47:54.612361  491055 node_ready.go:58] node "ingress-addon-legacy-570599" has status "Ready":"False"
	I0116 02:47:56.612674  491055 node_ready.go:58] node "ingress-addon-legacy-570599" has status "Ready":"False"
	I0116 02:47:58.613003  491055 node_ready.go:58] node "ingress-addon-legacy-570599" has status "Ready":"False"
	I0116 02:48:00.112796  491055 node_ready.go:49] node "ingress-addon-legacy-570599" has status "Ready":"True"
	I0116 02:48:00.112823  491055 node_ready.go:38] duration metric: took 14.503893383s waiting for node "ingress-addon-legacy-570599" to be "Ready" ...
	I0116 02:48:00.112834  491055 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 02:48:00.119125  491055 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-q48mq" in "kube-system" namespace to be "Ready" ...
	I0116 02:48:02.122774  491055 pod_ready.go:102] pod "coredns-66bff467f8-q48mq" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-01-16 02:47:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0116 02:48:04.122861  491055 pod_ready.go:102] pod "coredns-66bff467f8-q48mq" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-01-16 02:47:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0116 02:48:06.124531  491055 pod_ready.go:102] pod "coredns-66bff467f8-q48mq" in "kube-system" namespace has status "Ready":"False"
	I0116 02:48:08.124827  491055 pod_ready.go:102] pod "coredns-66bff467f8-q48mq" in "kube-system" namespace has status "Ready":"False"
	I0116 02:48:10.125198  491055 pod_ready.go:102] pod "coredns-66bff467f8-q48mq" in "kube-system" namespace has status "Ready":"False"
	I0116 02:48:12.625031  491055 pod_ready.go:102] pod "coredns-66bff467f8-q48mq" in "kube-system" namespace has status "Ready":"False"
	I0116 02:48:15.124746  491055 pod_ready.go:92] pod "coredns-66bff467f8-q48mq" in "kube-system" namespace has status "Ready":"True"
	I0116 02:48:15.124770  491055 pod_ready.go:81] duration metric: took 15.005614225s waiting for pod "coredns-66bff467f8-q48mq" in "kube-system" namespace to be "Ready" ...
	I0116 02:48:15.124779  491055 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-570599" in "kube-system" namespace to be "Ready" ...
	I0116 02:48:15.128609  491055 pod_ready.go:92] pod "etcd-ingress-addon-legacy-570599" in "kube-system" namespace has status "Ready":"True"
	I0116 02:48:15.128633  491055 pod_ready.go:81] duration metric: took 3.844173ms waiting for pod "etcd-ingress-addon-legacy-570599" in "kube-system" namespace to be "Ready" ...
	I0116 02:48:15.128644  491055 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-570599" in "kube-system" namespace to be "Ready" ...
	I0116 02:48:15.132622  491055 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-570599" in "kube-system" namespace has status "Ready":"True"
	I0116 02:48:15.132643  491055 pod_ready.go:81] duration metric: took 3.991971ms waiting for pod "kube-apiserver-ingress-addon-legacy-570599" in "kube-system" namespace to be "Ready" ...
	I0116 02:48:15.132655  491055 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-570599" in "kube-system" namespace to be "Ready" ...
	I0116 02:48:15.136208  491055 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-570599" in "kube-system" namespace has status "Ready":"True"
	I0116 02:48:15.136225  491055 pod_ready.go:81] duration metric: took 3.562726ms waiting for pod "kube-controller-manager-ingress-addon-legacy-570599" in "kube-system" namespace to be "Ready" ...
	I0116 02:48:15.136233  491055 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gpfwn" in "kube-system" namespace to be "Ready" ...
	I0116 02:48:15.139929  491055 pod_ready.go:92] pod "kube-proxy-gpfwn" in "kube-system" namespace has status "Ready":"True"
	I0116 02:48:15.139946  491055 pod_ready.go:81] duration metric: took 3.707737ms waiting for pod "kube-proxy-gpfwn" in "kube-system" namespace to be "Ready" ...
	I0116 02:48:15.139954  491055 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-570599" in "kube-system" namespace to be "Ready" ...
	I0116 02:48:15.320333  491055 request.go:629] Waited for 180.321043ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-570599
	I0116 02:48:15.520322  491055 request.go:629] Waited for 197.325152ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-570599
	I0116 02:48:15.522792  491055 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-570599" in "kube-system" namespace has status "Ready":"True"
	I0116 02:48:15.522816  491055 pod_ready.go:81] duration metric: took 382.85481ms waiting for pod "kube-scheduler-ingress-addon-legacy-570599" in "kube-system" namespace to be "Ready" ...
	I0116 02:48:15.522830  491055 pod_ready.go:38] duration metric: took 15.409983638s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 02:48:15.522852  491055 api_server.go:52] waiting for apiserver process to appear ...
	I0116 02:48:15.522921  491055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 02:48:15.533640  491055 api_server.go:72] duration metric: took 30.230057141s to wait for apiserver process to appear ...
	I0116 02:48:15.533660  491055 api_server.go:88] waiting for apiserver healthz status ...
	I0116 02:48:15.533678  491055 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0116 02:48:15.538060  491055 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0116 02:48:15.538874  491055 api_server.go:141] control plane version: v1.18.20
	I0116 02:48:15.538898  491055 api_server.go:131] duration metric: took 5.231728ms to wait for apiserver health ...
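The healthz wait is a plain HTTPS GET that passes once the apiserver answers 200 with body "ok". It can be reproduced by hand against this cluster; -k skips chain verification here for brevity, whereas minikube validates against its own CA bundle:

    curl -k https://192.168.49.2:8443/healthz
    # expected output once healthy:
    # ok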
	I0116 02:48:15.538905  491055 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 02:48:15.720329  491055 request.go:629] Waited for 181.352512ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0116 02:48:15.725477  491055 system_pods.go:59] 8 kube-system pods found
	I0116 02:48:15.725519  491055 system_pods.go:61] "coredns-66bff467f8-q48mq" [d6740c90-2ec4-464f-981d-d9fb5cfd2b47] Running
	I0116 02:48:15.725530  491055 system_pods.go:61] "etcd-ingress-addon-legacy-570599" [2d427ec9-6cc1-46dc-b2e6-5d45f049a119] Running
	I0116 02:48:15.725536  491055 system_pods.go:61] "kindnet-wdgtd" [040ffd72-5718-4818-8be8-235ef4852ae8] Running
	I0116 02:48:15.725549  491055 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-570599" [f8b31122-254c-49ba-b27f-48ce2aaa2c83] Running
	I0116 02:48:15.725559  491055 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-570599" [aefc835d-0894-4843-a77f-411406f5838d] Running
	I0116 02:48:15.725568  491055 system_pods.go:61] "kube-proxy-gpfwn" [1a6a3140-e6fd-4eb1-8ee8-8314317d583d] Running
	I0116 02:48:15.725575  491055 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-570599" [4b635a7c-6f57-4a20-abe6-adf1dcb17d00] Running
	I0116 02:48:15.725582  491055 system_pods.go:61] "storage-provisioner" [94f17543-119b-4056-b9d8-c93c2f2b03f5] Running
	I0116 02:48:15.725592  491055 system_pods.go:74] duration metric: took 186.678687ms to wait for pod list to return data ...
	I0116 02:48:15.725605  491055 default_sa.go:34] waiting for default service account to be created ...
	I0116 02:48:15.919974  491055 request.go:629] Waited for 194.268641ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0116 02:48:15.922409  491055 default_sa.go:45] found service account: "default"
	I0116 02:48:15.922442  491055 default_sa.go:55] duration metric: took 196.823652ms for default service account to be created ...
	I0116 02:48:15.922454  491055 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 02:48:16.120882  491055 request.go:629] Waited for 198.350041ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0116 02:48:16.126701  491055 system_pods.go:86] 8 kube-system pods found
	I0116 02:48:16.126730  491055 system_pods.go:89] "coredns-66bff467f8-q48mq" [d6740c90-2ec4-464f-981d-d9fb5cfd2b47] Running
	I0116 02:48:16.126742  491055 system_pods.go:89] "etcd-ingress-addon-legacy-570599" [2d427ec9-6cc1-46dc-b2e6-5d45f049a119] Running
	I0116 02:48:16.126748  491055 system_pods.go:89] "kindnet-wdgtd" [040ffd72-5718-4818-8be8-235ef4852ae8] Running
	I0116 02:48:16.126755  491055 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-570599" [f8b31122-254c-49ba-b27f-48ce2aaa2c83] Running
	I0116 02:48:16.126763  491055 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-570599" [aefc835d-0894-4843-a77f-411406f5838d] Running
	I0116 02:48:16.126769  491055 system_pods.go:89] "kube-proxy-gpfwn" [1a6a3140-e6fd-4eb1-8ee8-8314317d583d] Running
	I0116 02:48:16.126776  491055 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-570599" [4b635a7c-6f57-4a20-abe6-adf1dcb17d00] Running
	I0116 02:48:16.126792  491055 system_pods.go:89] "storage-provisioner" [94f17543-119b-4056-b9d8-c93c2f2b03f5] Running
	I0116 02:48:16.126810  491055 system_pods.go:126] duration metric: took 204.343512ms to wait for k8s-apps to be running ...
	I0116 02:48:16.126824  491055 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 02:48:16.126885  491055 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 02:48:16.139064  491055 system_svc.go:56] duration metric: took 12.23024ms WaitForService to wait for kubelet.
	I0116 02:48:16.139097  491055 kubeadm.go:581] duration metric: took 30.835517048s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 02:48:16.139120  491055 node_conditions.go:102] verifying NodePressure condition ...
	I0116 02:48:16.320514  491055 request.go:629] Waited for 181.315816ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0116 02:48:16.323392  491055 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0116 02:48:16.323425  491055 node_conditions.go:123] node cpu capacity is 8
	I0116 02:48:16.323440  491055 node_conditions.go:105] duration metric: took 184.314322ms to run NodePressure ...
	I0116 02:48:16.323457  491055 start.go:228] waiting for startup goroutines ...
	I0116 02:48:16.323468  491055 start.go:233] waiting for cluster config update ...
	I0116 02:48:16.323480  491055 start.go:242] writing updated cluster config ...
	I0116 02:48:16.323820  491055 ssh_runner.go:195] Run: rm -f paused
	I0116 02:48:16.371107  491055 start.go:600] kubectl: 1.29.0, cluster: 1.18.20 (minor skew: 11)
	I0116 02:48:16.374245  491055 out.go:177] 
	W0116 02:48:16.375726  491055 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.18.20.
	I0116 02:48:16.377055  491055 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0116 02:48:16.378422  491055 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-570599" cluster and "default" namespace by default
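The skew warning reflects kubectl's support policy: a client is only guaranteed compatible within one minor version of the server, and 1.29 against 1.18.20 is a skew of 11 minors. The hint in the output pins a matching client through minikube itself, e.g.:

    # Downloads and runs a kubectl matching the cluster's v1.18.20 control plane.
    minikube -p ingress-addon-legacy-570599 kubectl -- get pods -A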
	
	
	==> CRI-O <==
	Jan 16 02:51:25 ingress-addon-legacy-570599 crio[955]: time="2024-01-16 02:51:25.327815214Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-5nqdw from CNI network \"kindnet\" (type=ptp)"
	Jan 16 02:51:25 ingress-addon-legacy-570599 crio[955]: time="2024-01-16 02:51:25.357596430Z" level=info msg="Stopped pod sandbox: 507db6ead4b0337ddb13c38e2aa88d060a4d9218bcfe6b3087649aab3d73e7f0" id=c97d90bd-2e59-4b82-966d-2af22482caac name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 16 02:51:25 ingress-addon-legacy-570599 crio[955]: time="2024-01-16 02:51:25.357700852Z" level=info msg="Stopped pod sandbox (already stopped): 507db6ead4b0337ddb13c38e2aa88d060a4d9218bcfe6b3087649aab3d73e7f0" id=16f40d70-9a8f-4fbc-8bc3-6b64127bfc5e name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 16 02:51:29 ingress-addon-legacy-570599 crio[955]: time="2024-01-16 02:51:29.918687827Z" level=info msg="Removing container: cb4564f10695ae155b1bff885cd68e706bc6c66be62666c8625a7b61eb37c450" id=860e3b72-5a1d-4276-bda0-accdbc00fcf2 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Jan 16 02:51:29 ingress-addon-legacy-570599 crio[955]: time="2024-01-16 02:51:29.932313905Z" level=info msg="Removed container cb4564f10695ae155b1bff885cd68e706bc6c66be62666c8625a7b61eb37c450: ingress-nginx/ingress-nginx-admission-create-kfmjf/create" id=860e3b72-5a1d-4276-bda0-accdbc00fcf2 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Jan 16 02:51:29 ingress-addon-legacy-570599 crio[955]: time="2024-01-16 02:51:29.933728279Z" level=info msg="Removing container: 5d22a4f0c39a84743122792a34e06e5e1ed8db4a18998641bee94e7176caa74f" id=28de70fe-f6ca-4db9-a436-4e3f26ff0ca5 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Jan 16 02:51:29 ingress-addon-legacy-570599 crio[955]: time="2024-01-16 02:51:29.947492872Z" level=info msg="Removed container 5d22a4f0c39a84743122792a34e06e5e1ed8db4a18998641bee94e7176caa74f: ingress-nginx/ingress-nginx-controller-7fcf777cb7-5nqdw/controller" id=28de70fe-f6ca-4db9-a436-4e3f26ff0ca5 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Jan 16 02:51:29 ingress-addon-legacy-570599 crio[955]: time="2024-01-16 02:51:29.948802533Z" level=info msg="Removing container: a4271485bd1fbd9a56f35b70e225ad14e0558e70c82265e22d8775af205fb08e" id=6c51ce78-c6ae-4bbb-96f1-003366493826 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Jan 16 02:51:29 ingress-addon-legacy-570599 crio[955]: time="2024-01-16 02:51:29.962675809Z" level=info msg="Removed container a4271485bd1fbd9a56f35b70e225ad14e0558e70c82265e22d8775af205fb08e: ingress-nginx/ingress-nginx-admission-patch-tt4p9/patch" id=6c51ce78-c6ae-4bbb-96f1-003366493826 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Jan 16 02:51:29 ingress-addon-legacy-570599 crio[955]: time="2024-01-16 02:51:29.963894429Z" level=info msg="Stopping pod sandbox: b62b5a7db1f058f22a964580aabb78882fa3a25bf204928fad2adba71cf77628" id=8341e9bd-e782-44f0-9e56-9690ace8b2d3 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 16 02:51:29 ingress-addon-legacy-570599 crio[955]: time="2024-01-16 02:51:29.963934467Z" level=info msg="Stopped pod sandbox (already stopped): b62b5a7db1f058f22a964580aabb78882fa3a25bf204928fad2adba71cf77628" id=8341e9bd-e782-44f0-9e56-9690ace8b2d3 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 16 02:51:29 ingress-addon-legacy-570599 crio[955]: time="2024-01-16 02:51:29.964284838Z" level=info msg="Removing pod sandbox: b62b5a7db1f058f22a964580aabb78882fa3a25bf204928fad2adba71cf77628" id=ae59925f-6a99-4188-a0cc-2ee0672b7dfa name=/runtime.v1alpha2.RuntimeService/RemovePodSandbox
	Jan 16 02:51:29 ingress-addon-legacy-570599 crio[955]: time="2024-01-16 02:51:29.969203492Z" level=info msg="Removed pod sandbox: b62b5a7db1f058f22a964580aabb78882fa3a25bf204928fad2adba71cf77628" id=ae59925f-6a99-4188-a0cc-2ee0672b7dfa name=/runtime.v1alpha2.RuntimeService/RemovePodSandbox
	Jan 16 02:51:29 ingress-addon-legacy-570599 crio[955]: time="2024-01-16 02:51:29.969665297Z" level=info msg="Stopping pod sandbox: 4bd2d70d4557a0f600688954f216684988e3e0a987081faa37ad4edf6b6e529e" id=96d00775-cf6b-4ad7-a6ec-bd7ff3089f83 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 16 02:51:29 ingress-addon-legacy-570599 crio[955]: time="2024-01-16 02:51:29.969702163Z" level=info msg="Stopped pod sandbox (already stopped): 4bd2d70d4557a0f600688954f216684988e3e0a987081faa37ad4edf6b6e529e" id=96d00775-cf6b-4ad7-a6ec-bd7ff3089f83 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 16 02:51:29 ingress-addon-legacy-570599 crio[955]: time="2024-01-16 02:51:29.969966949Z" level=info msg="Removing pod sandbox: 4bd2d70d4557a0f600688954f216684988e3e0a987081faa37ad4edf6b6e529e" id=252927f6-8522-4783-ba84-e0fe76804670 name=/runtime.v1alpha2.RuntimeService/RemovePodSandbox
	Jan 16 02:51:29 ingress-addon-legacy-570599 crio[955]: time="2024-01-16 02:51:29.975771032Z" level=info msg="Removed pod sandbox: 4bd2d70d4557a0f600688954f216684988e3e0a987081faa37ad4edf6b6e529e" id=252927f6-8522-4783-ba84-e0fe76804670 name=/runtime.v1alpha2.RuntimeService/RemovePodSandbox
	Jan 16 02:51:29 ingress-addon-legacy-570599 crio[955]: time="2024-01-16 02:51:29.976066491Z" level=info msg="Stopping pod sandbox: 645a49c42d1f33878aac76e499c36b3e4f1f303c78ce81dd886fe927a5b45700" id=c1854477-be24-4015-a495-9643ac59f398 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 16 02:51:29 ingress-addon-legacy-570599 crio[955]: time="2024-01-16 02:51:29.976095309Z" level=info msg="Stopped pod sandbox (already stopped): 645a49c42d1f33878aac76e499c36b3e4f1f303c78ce81dd886fe927a5b45700" id=c1854477-be24-4015-a495-9643ac59f398 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 16 02:51:29 ingress-addon-legacy-570599 crio[955]: time="2024-01-16 02:51:29.976323257Z" level=info msg="Removing pod sandbox: 645a49c42d1f33878aac76e499c36b3e4f1f303c78ce81dd886fe927a5b45700" id=c1f9cd77-8f00-48e2-bde0-cef83c9ced9c name=/runtime.v1alpha2.RuntimeService/RemovePodSandbox
	Jan 16 02:51:29 ingress-addon-legacy-570599 crio[955]: time="2024-01-16 02:51:29.980841777Z" level=info msg="Removed pod sandbox: 645a49c42d1f33878aac76e499c36b3e4f1f303c78ce81dd886fe927a5b45700" id=c1f9cd77-8f00-48e2-bde0-cef83c9ced9c name=/runtime.v1alpha2.RuntimeService/RemovePodSandbox
	Jan 16 02:51:29 ingress-addon-legacy-570599 crio[955]: time="2024-01-16 02:51:29.981254711Z" level=info msg="Stopping pod sandbox: 507db6ead4b0337ddb13c38e2aa88d060a4d9218bcfe6b3087649aab3d73e7f0" id=9c2047fc-a28a-431c-b31f-ad6524e16990 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 16 02:51:29 ingress-addon-legacy-570599 crio[955]: time="2024-01-16 02:51:29.981291553Z" level=info msg="Stopped pod sandbox (already stopped): 507db6ead4b0337ddb13c38e2aa88d060a4d9218bcfe6b3087649aab3d73e7f0" id=9c2047fc-a28a-431c-b31f-ad6524e16990 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 16 02:51:29 ingress-addon-legacy-570599 crio[955]: time="2024-01-16 02:51:29.981545155Z" level=info msg="Removing pod sandbox: 507db6ead4b0337ddb13c38e2aa88d060a4d9218bcfe6b3087649aab3d73e7f0" id=78109b0c-c806-44a5-b138-bad6a39e0d34 name=/runtime.v1alpha2.RuntimeService/RemovePodSandbox
	Jan 16 02:51:29 ingress-addon-legacy-570599 crio[955]: time="2024-01-16 02:51:29.986317848Z" level=info msg="Removed pod sandbox: 507db6ead4b0337ddb13c38e2aa88d060a4d9218bcfe6b3087649aab3d73e7f0" id=78109b0c-c806-44a5-b138-bad6a39e0d34 name=/runtime.v1alpha2.RuntimeService/RemovePodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                     CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f5aee383f94a0       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7   21 seconds ago      Running             hello-world-app           0                   36b1569f2cbce       hello-world-app-5f5d8b66bb-p65ft
	d818ac3daa904       docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686           2 minutes ago       Running             nginx                     0                   06b22aa23f1ff       nginx
	313f14234523c       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                          3 minutes ago       Running             coredns                   0                   9204220f2c8cb       coredns-66bff467f8-q48mq
	74db31c402802       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                          3 minutes ago       Running             storage-provisioner       0                   dec98580d4fb9       storage-provisioner
	93282fec1179f       docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052        3 minutes ago       Running             kindnet-cni               0                   de90eacc76e15       kindnet-wdgtd
	edb1a7a04b0d7       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                          3 minutes ago       Running             kube-proxy                0                   b6376181bc9a6       kube-proxy-gpfwn
	ed402a3b5a503       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                          4 minutes ago       Running             kube-controller-manager   0                   c9d95c2dab08c       kube-controller-manager-ingress-addon-legacy-570599
	30cf0a6460d23       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                          4 minutes ago       Running             kube-apiserver            0                   01280f26714e9       kube-apiserver-ingress-addon-legacy-570599
	1ae4b191049dc       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                          4 minutes ago       Running             kube-scheduler            0                   14f1943a38935       kube-scheduler-ingress-addon-legacy-570599
	2630485ae825f       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                          4 minutes ago       Running             etcd                      0                   5284b05b27356       etcd-ingress-addon-legacy-570599
	
	
	==> coredns [313f14234523c6933a17424e9559bf1a8e6e7f11f44deb76065cc873123c264a] <==
	[INFO] 10.244.0.5:36573 - 6190 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.003134589s
	[INFO] 10.244.0.5:39595 - 49300 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.002684212s
	[INFO] 10.244.0.5:34644 - 34160 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003215163s
	[INFO] 10.244.0.5:36573 - 65451 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.002499159s
	[INFO] 10.244.0.5:45161 - 19738 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003239545s
	[INFO] 10.244.0.5:50776 - 14271 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003228186s
	[INFO] 10.244.0.5:50057 - 1616 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003355649s
	[INFO] 10.244.0.5:56909 - 3111 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003046172s
	[INFO] 10.244.0.5:56673 - 47207 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003241591s
	[INFO] 10.244.0.5:39595 - 10384 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003644076s
	[INFO] 10.244.0.5:36573 - 35539 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003598991s
	[INFO] 10.244.0.5:45161 - 150 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003605023s
	[INFO] 10.244.0.5:50776 - 9051 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003604937s
	[INFO] 10.244.0.5:34644 - 56896 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003745308s
	[INFO] 10.244.0.5:50057 - 20170 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003110536s
	[INFO] 10.244.0.5:56909 - 49148 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003128902s
	[INFO] 10.244.0.5:39595 - 23946 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000161021s
	[INFO] 10.244.0.5:36573 - 26149 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000060236s
	[INFO] 10.244.0.5:56673 - 31457 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003576071s
	[INFO] 10.244.0.5:45161 - 41111 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000119398s
	[INFO] 10.244.0.5:50776 - 26796 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00007399s
	[INFO] 10.244.0.5:50057 - 55884 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000057675s
	[INFO] 10.244.0.5:56909 - 53998 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000058986s
	[INFO] 10.244.0.5:34644 - 11530 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000158598s
	[INFO] 10.244.0.5:56673 - 64180 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000056733s
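The NXDOMAIN/NOERROR churn above is ordinary search-path expansion rather than a resolver fault: with the default ndots:5, the four-dot name "hello-world-app.default.svc.cluster.local" is retried under every inherited search suffix, including the GCE host domains c.k8s-minikube.internal and google.internal, before the bare name finally answers NOERROR. A pod resolv.conf that would produce exactly this query pattern looks roughly like the sketch below (the nameserver address is an assumption; the kube-dns ClusterIP is not shown in these logs):

	# sketch of the pod's /etc/resolv.conf; the nameserver IP is assumed
	search default.svc.cluster.local svc.cluster.local cluster.local c.k8s-minikube.internal google.internal
	nameserver 10.96.0.10
	options ndots:5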
	
	
	==> describe nodes <==
	Name:               ingress-addon-legacy-570599
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-570599
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e8fa5f64d0e7272be43ff25ed3826261f0a2578
	                    minikube.k8s.io/name=ingress-addon-legacy-570599
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_16T02_47_30_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 02:47:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-570599
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Jan 2024 02:51:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Jan 2024 02:51:30 +0000   Tue, 16 Jan 2024 02:47:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Jan 2024 02:51:30 +0000   Tue, 16 Jan 2024 02:47:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Jan 2024 02:51:30 +0000   Tue, 16 Jan 2024 02:47:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Jan 2024 02:51:30 +0000   Tue, 16 Jan 2024 02:48:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-570599
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	System Info:
	  Machine ID:                 4354b5cbb59347ed84c3d4807cdced9a
	  System UUID:                dd4bb786-d627-4451-944d-4ac3cc4936c1
	  Boot ID:                    cc6eb99d-2787-4545-a9c9-22d5006806a3
	  Kernel Version:             5.15.0-1048-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-p65ft                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m45s
	  kube-system                 coredns-66bff467f8-q48mq                               100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m46s
	  kube-system                 etcd-ingress-addon-legacy-570599                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 kindnet-wdgtd                                          100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m46s
	  kube-system                 kube-apiserver-ingress-addon-legacy-570599             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 kube-controller-manager-ingress-addon-legacy-570599    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 kube-proxy-gpfwn                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m46s
	  kube-system                 kube-scheduler-ingress-addon-legacy-570599             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             120Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  NodeHasSufficientMemory  4m9s (x5 over 4m9s)  kubelet     Node ingress-addon-legacy-570599 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m9s (x5 over 4m9s)  kubelet     Node ingress-addon-legacy-570599 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m9s (x4 over 4m9s)  kubelet     Node ingress-addon-legacy-570599 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m1s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m                   kubelet     Node ingress-addon-legacy-570599 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m                   kubelet     Node ingress-addon-legacy-570599 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m                   kubelet     Node ingress-addon-legacy-570599 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m45s                kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m30s                kubelet     Node ingress-addon-legacy-570599 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.007353] FS-Cache: O-key=[8] 'd7a20f0200000000'
	[  +0.004940] FS-Cache: N-cookie c=00000021 [p=00000015 fl=2 nc=0 na=1]
	[  +0.007945] FS-Cache: N-cookie d=00000000f7250940{9p.inode} n=000000006b0f1592
	[  +0.007364] FS-Cache: N-key=[8] 'd7a20f0200000000'
	[  +0.285216] FS-Cache: Duplicate cookie detected
	[  +0.004716] FS-Cache: O-cookie c=0000001b [p=00000015 fl=226 nc=0 na=1]
	[  +0.006747] FS-Cache: O-cookie d=00000000f7250940{9p.inode} n=00000000fab5c785
	[  +0.007358] FS-Cache: O-key=[8] 'dda20f0200000000'
	[  +0.004971] FS-Cache: N-cookie c=00000022 [p=00000015 fl=2 nc=0 na=1]
	[  +0.007930] FS-Cache: N-cookie d=00000000f7250940{9p.inode} n=0000000041298e86
	[  +0.008749] FS-Cache: N-key=[8] 'dda20f0200000000'
	[Jan16 02:48] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000031] ll header: 00000000: da b9 ff e8 c2 1a 3a 9b f7 c5 8d d7 08 00
	[  +1.011963] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: da b9 ff e8 c2 1a 3a 9b f7 c5 8d d7 08 00
	[  +2.015838] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: da b9 ff e8 c2 1a 3a 9b f7 c5 8d d7 08 00
	[Jan16 02:49] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: da b9 ff e8 c2 1a 3a 9b f7 c5 8d d7 08 00
	[  +8.191334] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: da b9 ff e8 c2 1a 3a 9b f7 c5 8d d7 08 00
	[ +16.126801] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: da b9 ff e8 c2 1a 3a 9b f7 c5 8d d7 08 00
	[ +33.021533] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: da b9 ff e8 c2 1a 3a 9b f7 c5 8d d7 08 00
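These "martian source" entries likely correspond to the failed curl against http://127.0.0.1 in this test: the kernel drops packets sourced from 127.0.0.1 and destined for the pod IP 10.244.0.5 as martian unless route_localnet is enabled on the node. A hypothetical spot-check from the host (the container name is taken from this report):

	# 1 allows loopback-sourced traffic to be DNAT'ed to pod IPs;
	# 0 would explain the drops logged above
	docker exec ingress-addon-legacy-570599 sysctl net.ipv4.conf.all.route_localnet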
	
	
	==> etcd [2630485ae825fb5867ef41172c0c94b6e63d497856ec682fac9bbf057fb39e4b] <==
	raft2024/01/16 02:47:22 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2024-01-16 02:47:22.535600 W | auth: simple token is not cryptographically signed
	2024-01-16 02:47:22.538379 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2024-01-16 02:47:22.601005 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2024/01/16 02:47:22 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2024-01-16 02:47:22.601722 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2024-01-16 02:47:22.602206 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-01-16 02:47:22.602324 I | embed: listening for peers on 192.168.49.2:2380
	2024-01-16 02:47:22.602460 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2024/01/16 02:47:23 INFO: aec36adc501070cc is starting a new election at term 1
	raft2024/01/16 02:47:23 INFO: aec36adc501070cc became candidate at term 2
	raft2024/01/16 02:47:23 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2024/01/16 02:47:23 INFO: aec36adc501070cc became leader at term 2
	raft2024/01/16 02:47:23 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2024-01-16 02:47:23.331489 I | etcdserver: setting up the initial cluster version to 3.4
	2024-01-16 02:47:23.331567 I | etcdserver: published {Name:ingress-addon-legacy-570599 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2024-01-16 02:47:23.331585 I | embed: ready to serve client requests
	2024-01-16 02:47:23.331674 I | embed: ready to serve client requests
	2024-01-16 02:47:23.332188 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-01-16 02:47:23.332331 I | etcdserver/api: enabled capabilities for version 3.4
	2024-01-16 02:47:23.333125 I | embed: serving client requests on 127.0.0.1:2379
	2024-01-16 02:47:23.333406 I | embed: serving client requests on 192.168.49.2:2379
	2024-01-16 02:47:44.603646 W | etcdserver: request "header:<ID:8128026528070280545 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/ingress-addon-legacy-570599.17aab3f2d3642252\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/ingress-addon-legacy-570599.17aab3f2d3642252\" value_size:668 lease:8128026528070280175 >> failure:<>>" with result "size:16" took too long (102.076067ms) to execute
	2024-01-16 02:47:50.253931 W | etcdserver: read-only range request "key:\"/registry/minions/ingress-addon-legacy-570599\" " with result "range_response_count:1 size:6604" took too long (142.576762ms) to execute
	2024-01-16 02:47:50.450503 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-controller-manager-ingress-addon-legacy-570599\" " with result "range_response_count:1 size:6682" took too long (188.486592ms) to execute
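etcd warns once a request exceeds its 100ms slow-request threshold, so the three "took too long" entries above (102-188ms) point at disk or CPU contention on the CI host rather than a protocol problem. Since this log shows metrics served on 127.0.0.1:2381, a plausible spot-check for slow backend commits from inside the node would be:

	# backend-commit latency histogram; a sustained p99 well above ~25ms
	# usually accounts for "took too long" warnings
	curl -s http://127.0.0.1:2381/metrics | grep etcd_disk_backend_commit_duration_seconds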
	
	
	==> kernel <==
	 02:51:30 up  2:33,  0 users,  load average: 0.12, 0.65, 1.22
	Linux ingress-addon-legacy-570599 5.15.0-1048-gcp #56~20.04.1-Ubuntu SMP Fri Nov 24 16:52:37 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [93282fec1179f8a104a75ebfedb85fc89b2a316fc9b9efdc848fc4064a95c37e] <==
	I0116 02:49:21.366711       1 main.go:227] handling current node
	I0116 02:49:31.378862       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 02:49:31.378887       1 main.go:227] handling current node
	I0116 02:49:41.382436       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 02:49:41.382462       1 main.go:227] handling current node
	I0116 02:49:51.386328       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 02:49:51.386354       1 main.go:227] handling current node
	I0116 02:50:01.390684       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 02:50:01.390711       1 main.go:227] handling current node
	I0116 02:50:11.402852       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 02:50:11.402879       1 main.go:227] handling current node
	I0116 02:50:21.407048       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 02:50:21.407074       1 main.go:227] handling current node
	I0116 02:50:31.419151       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 02:50:31.419176       1 main.go:227] handling current node
	I0116 02:50:41.423283       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 02:50:41.423313       1 main.go:227] handling current node
	I0116 02:50:51.435080       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 02:50:51.435104       1 main.go:227] handling current node
	I0116 02:51:01.438181       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 02:51:01.438208       1 main.go:227] handling current node
	I0116 02:51:11.451136       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 02:51:11.451161       1 main.go:227] handling current node
	I0116 02:51:21.460159       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 02:51:21.460182       1 main.go:227] handling current node
	
	
	==> kube-apiserver [30cf0a6460d235a617a9dbccacaae9b2d662a5851fe0f62dfa060ac966ddd37c] <==
	I0116 02:47:26.659118       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	E0116 02:47:26.659798       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I0116 02:47:26.757676       1 cache.go:39] Caches are synced for autoregister controller
	I0116 02:47:26.757945       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0116 02:47:26.758195       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0116 02:47:26.758708       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0116 02:47:26.759151       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0116 02:47:27.656753       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0116 02:47:27.656774       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0116 02:47:27.662424       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0116 02:47:27.667913       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0116 02:47:27.667936       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0116 02:47:27.935710       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0116 02:47:27.965992       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0116 02:47:28.030547       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0116 02:47:28.031322       1 controller.go:609] quota admission added evaluator for: endpoints
	I0116 02:47:28.034318       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0116 02:47:28.970886       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0116 02:47:29.584488       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0116 02:47:29.765635       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0116 02:47:29.922924       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0116 02:47:44.392743       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0116 02:47:44.395669       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0116 02:48:17.039027       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0116 02:48:45.017939       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	
	==> kube-controller-manager [ed402a3b5a5036df587051b66d9598458206775ee8e6575e64ca4c036d8609a3] <==
	I0116 02:47:44.618218       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"feefb6a5-0041-4c47-aa71-d5adb3fad65e", APIVersion:"apps/v1", ResourceVersion:"322", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-q48mq
	I0116 02:47:44.709838       1 shared_informer.go:230] Caches are synced for persistent volume 
	I0116 02:47:44.722018       1 shared_informer.go:230] Caches are synced for attach detach 
	I0116 02:47:44.740431       1 shared_informer.go:230] Caches are synced for PV protection 
	I0116 02:47:44.747020       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"e24a9734-58f0-42ed-89bf-eb93b19f1767", APIVersion:"apps/v1", ResourceVersion:"361", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I0116 02:47:44.800409       1 shared_informer.go:230] Caches are synced for expand 
	I0116 02:47:44.800443       1 shared_informer.go:230] Caches are synced for endpoint 
	I0116 02:47:44.809817       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"feefb6a5-0041-4c47-aa71-d5adb3fad65e", APIVersion:"apps/v1", ResourceVersion:"363", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-8bgtk
	I0116 02:47:44.901225       1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator 
	I0116 02:47:44.901242       1 shared_informer.go:230] Caches are synced for endpoint_slice 
	I0116 02:47:44.930268       1 shared_informer.go:230] Caches are synced for resource quota 
	I0116 02:47:45.000458       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0116 02:47:45.000497       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0116 02:47:45.000671       1 shared_informer.go:230] Caches are synced for resource quota 
	I0116 02:47:45.028298       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0116 02:48:04.424414       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0116 02:48:17.031401       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"4c3025ec-89fe-43ae-a04e-c377ec34a6c8", APIVersion:"apps/v1", ResourceVersion:"483", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0116 02:48:17.038002       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"06aaf1e2-8063-4525-9bcc-20dc32eb0b0d", APIVersion:"apps/v1", ResourceVersion:"484", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-5nqdw
	I0116 02:48:17.106751       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"9f7e85f8-6023-4d68-95a9-be740f2b6145", APIVersion:"batch/v1", ResourceVersion:"488", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-kfmjf
	I0116 02:48:17.120588       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"2f26b8fb-80dc-46ae-b7ce-40c81ab168ff", APIVersion:"batch/v1", ResourceVersion:"499", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-tt4p9
	I0116 02:48:22.118670       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"9f7e85f8-6023-4d68-95a9-be740f2b6145", APIVersion:"batch/v1", ResourceVersion:"501", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0116 02:48:23.121009       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"2f26b8fb-80dc-46ae-b7ce-40c81ab168ff", APIVersion:"batch/v1", ResourceVersion:"509", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0116 02:51:05.623552       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"4abea14c-e8a3-4f7d-9140-91b33881f6db", APIVersion:"apps/v1", ResourceVersion:"726", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0116 02:51:05.629423       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"a046f202-2232-40f9-b463-7487c7a7c38c", APIVersion:"apps/v1", ResourceVersion:"727", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-p65ft
	E0116 02:51:27.940241       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-rvhjg" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	
	==> kube-proxy [edb1a7a04b0d78516c51847511e8b530f63f824b61f37684c04f4cb167fe0043] <==
	W0116 02:47:45.214179       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0116 02:47:45.221227       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I0116 02:47:45.221258       1 server_others.go:186] Using iptables Proxier.
	I0116 02:47:45.221494       1 server.go:583] Version: v1.18.20
	I0116 02:47:45.222438       1 config.go:133] Starting endpoints config controller
	I0116 02:47:45.222465       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0116 02:47:45.222610       1 config.go:315] Starting service config controller
	I0116 02:47:45.222654       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0116 02:47:45.322630       1 shared_informer.go:230] Caches are synced for endpoints config 
	I0116 02:47:45.322786       1 shared_informer.go:230] Caches are synced for service config 
	
	
	==> kube-scheduler [1ae4b191049dcd88e35c10a2dd4a46f5cc7538c6979c9f2957ab6a21b90ccb18] <==
	W0116 02:47:26.706203       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0116 02:47:26.706253       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0116 02:47:26.717942       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0116 02:47:26.717967       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0116 02:47:26.720606       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0116 02:47:26.720628       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0116 02:47:26.721005       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0116 02:47:26.721059       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0116 02:47:26.722037       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0116 02:47:26.723143       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0116 02:47:26.723336       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0116 02:47:26.723377       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0116 02:47:26.723538       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0116 02:47:26.723592       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0116 02:47:26.723725       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0116 02:47:26.723892       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0116 02:47:26.723371       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0116 02:47:26.723377       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0116 02:47:26.723624       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0116 02:47:26.723736       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0116 02:47:27.555744       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0116 02:47:27.684415       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0116 02:47:27.804198       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0116 02:47:28.220788       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0116 02:47:44.623397       1 factory.go:503] pod: kube-system/coredns-66bff467f8-8bgtk is already present in the active queue
	
	
	==> kubelet <==
	Jan 16 02:51:20 ingress-addon-legacy-570599 kubelet[1858]: E0116 02:51:20.932117    1858 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jan 16 02:51:20 ingress-addon-legacy-570599 kubelet[1858]: E0116 02:51:20.932155    1858 pod_workers.go:191] Error syncing pod ad734cbd-cc7b-4929-b57a-e7f132c5953a ("kube-ingress-dns-minikube_kube-system(ad734cbd-cc7b-4929-b57a-e7f132c5953a)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Jan 16 02:51:21 ingress-addon-legacy-570599 kubelet[1858]: I0116 02:51:21.374674    1858 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-jrtjz" (UniqueName: "kubernetes.io/secret/ad734cbd-cc7b-4929-b57a-e7f132c5953a-minikube-ingress-dns-token-jrtjz") pod "ad734cbd-cc7b-4929-b57a-e7f132c5953a" (UID: "ad734cbd-cc7b-4929-b57a-e7f132c5953a")
	Jan 16 02:51:21 ingress-addon-legacy-570599 kubelet[1858]: I0116 02:51:21.376802    1858 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad734cbd-cc7b-4929-b57a-e7f132c5953a-minikube-ingress-dns-token-jrtjz" (OuterVolumeSpecName: "minikube-ingress-dns-token-jrtjz") pod "ad734cbd-cc7b-4929-b57a-e7f132c5953a" (UID: "ad734cbd-cc7b-4929-b57a-e7f132c5953a"). InnerVolumeSpecName "minikube-ingress-dns-token-jrtjz". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 16 02:51:21 ingress-addon-legacy-570599 kubelet[1858]: I0116 02:51:21.475030    1858 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-jrtjz" (UniqueName: "kubernetes.io/secret/ad734cbd-cc7b-4929-b57a-e7f132c5953a-minikube-ingress-dns-token-jrtjz") on node "ingress-addon-legacy-570599" DevicePath ""
	Jan 16 02:51:23 ingress-addon-legacy-570599 kubelet[1858]: E0116 02:51:23.168623    1858 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-5nqdw.17aab425c17b194c", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-5nqdw", UID:"f1655333-7db0-4d19-b9ab-30f8ab071bdf", APIVersion:"v1", ResourceVersion:"489", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-570599"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc16199aac9e52b4c, ext:233615665559, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc16199aac9e52b4c, ext:233615665559, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-5nqdw.17aab425c17b194c" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jan 16 02:51:23 ingress-addon-legacy-570599 kubelet[1858]: E0116 02:51:23.171558    1858 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-5nqdw.17aab425c17b194c", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-5nqdw", UID:"f1655333-7db0-4d19-b9ab-30f8ab071bdf", APIVersion:"v1", ResourceVersion:"489", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-570599"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc16199aac9e52b4c, ext:233615665559, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc16199aaca096637, ext:233618039946, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-5nqdw.17aab425c17b194c" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jan 16 02:51:25 ingress-addon-legacy-570599 kubelet[1858]: W0116 02:51:25.385647    1858 pod_container_deletor.go:77] Container "507db6ead4b0337ddb13c38e2aa88d060a4d9218bcfe6b3087649aab3d73e7f0" not found in pod's containers
	Jan 16 02:51:25 ingress-addon-legacy-570599 kubelet[1858]: I0116 02:51:25.506409    1858 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-hnktb" (UniqueName: "kubernetes.io/secret/f1655333-7db0-4d19-b9ab-30f8ab071bdf-ingress-nginx-token-hnktb") pod "f1655333-7db0-4d19-b9ab-30f8ab071bdf" (UID: "f1655333-7db0-4d19-b9ab-30f8ab071bdf")
	Jan 16 02:51:25 ingress-addon-legacy-570599 kubelet[1858]: I0116 02:51:25.506464    1858 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/f1655333-7db0-4d19-b9ab-30f8ab071bdf-webhook-cert") pod "f1655333-7db0-4d19-b9ab-30f8ab071bdf" (UID: "f1655333-7db0-4d19-b9ab-30f8ab071bdf")
	Jan 16 02:51:25 ingress-addon-legacy-570599 kubelet[1858]: I0116 02:51:25.508400    1858 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1655333-7db0-4d19-b9ab-30f8ab071bdf-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "f1655333-7db0-4d19-b9ab-30f8ab071bdf" (UID: "f1655333-7db0-4d19-b9ab-30f8ab071bdf"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 16 02:51:25 ingress-addon-legacy-570599 kubelet[1858]: I0116 02:51:25.508520    1858 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1655333-7db0-4d19-b9ab-30f8ab071bdf-ingress-nginx-token-hnktb" (OuterVolumeSpecName: "ingress-nginx-token-hnktb") pod "f1655333-7db0-4d19-b9ab-30f8ab071bdf" (UID: "f1655333-7db0-4d19-b9ab-30f8ab071bdf"). InnerVolumeSpecName "ingress-nginx-token-hnktb". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 16 02:51:25 ingress-addon-legacy-570599 kubelet[1858]: I0116 02:51:25.606778    1858 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/f1655333-7db0-4d19-b9ab-30f8ab071bdf-webhook-cert") on node "ingress-addon-legacy-570599" DevicePath ""
	Jan 16 02:51:25 ingress-addon-legacy-570599 kubelet[1858]: I0116 02:51:25.606818    1858 reconciler.go:319] Volume detached for volume "ingress-nginx-token-hnktb" (UniqueName: "kubernetes.io/secret/f1655333-7db0-4d19-b9ab-30f8ab071bdf-ingress-nginx-token-hnktb") on node "ingress-addon-legacy-570599" DevicePath ""
	Jan 16 02:51:29 ingress-addon-legacy-570599 kubelet[1858]: I0116 02:51:29.917457    1858 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: cb4564f10695ae155b1bff885cd68e706bc6c66be62666c8625a7b61eb37c450
	Jan 16 02:51:29 ingress-addon-legacy-570599 kubelet[1858]: I0116 02:51:29.932637    1858 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 5d22a4f0c39a84743122792a34e06e5e1ed8db4a18998641bee94e7176caa74f
	Jan 16 02:51:29 ingress-addon-legacy-570599 kubelet[1858]: I0116 02:51:29.947736    1858 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: a4271485bd1fbd9a56f35b70e225ad14e0558e70c82265e22d8775af205fb08e
	Jan 16 02:51:30 ingress-addon-legacy-570599 kubelet[1858]: E0116 02:51:30.050955    1858 fsHandler.go:118] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/fc9fa569f3abac843a875c84fa3a2b6ff56e3a7f50bf80ba69d78792e61dbc1c/diff" to get inode usage: stat /var/lib/containers/storage/overlay/fc9fa569f3abac843a875c84fa3a2b6ff56e3a7f50bf80ba69d78792e61dbc1c/diff: no such file or directory, extraDiskErr: <nil>
	Jan 16 02:51:30 ingress-addon-legacy-570599 kubelet[1858]: E0116 02:51:30.050959    1858 fsHandler.go:118] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/fc9fa569f3abac843a875c84fa3a2b6ff56e3a7f50bf80ba69d78792e61dbc1c/diff" to get inode usage: stat /var/lib/containers/storage/overlay/fc9fa569f3abac843a875c84fa3a2b6ff56e3a7f50bf80ba69d78792e61dbc1c/diff: no such file or directory, extraDiskErr: <nil>
	Jan 16 02:51:30 ingress-addon-legacy-570599 kubelet[1858]: E0116 02:51:30.057270    1858 fsHandler.go:118] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/0dc95ffbbdfa090721e389b341e7b3506fd68e704271a11c93e8142bc7f5a930/diff" to get inode usage: stat /var/lib/containers/storage/overlay/0dc95ffbbdfa090721e389b341e7b3506fd68e704271a11c93e8142bc7f5a930/diff: no such file or directory, extraDiskErr: <nil>
	Jan 16 02:51:30 ingress-addon-legacy-570599 kubelet[1858]: E0116 02:51:30.059275    1858 fsHandler.go:118] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/0734a2abe886add6238314cf60ec1cb242617b68ff66e98b0113a2cef18953f1/diff" to get inode usage: stat /var/lib/containers/storage/overlay/0734a2abe886add6238314cf60ec1cb242617b68ff66e98b0113a2cef18953f1/diff: no such file or directory, extraDiskErr: <nil>
	Jan 16 02:51:30 ingress-addon-legacy-570599 kubelet[1858]: E0116 02:51:30.060150    1858 fsHandler.go:118] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/0734a2abe886add6238314cf60ec1cb242617b68ff66e98b0113a2cef18953f1/diff" to get inode usage: stat /var/lib/containers/storage/overlay/0734a2abe886add6238314cf60ec1cb242617b68ff66e98b0113a2cef18953f1/diff: no such file or directory, extraDiskErr: <nil>
	Jan 16 02:51:30 ingress-addon-legacy-570599 kubelet[1858]: E0116 02:51:30.064491    1858 fsHandler.go:118] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/10dc22edbe2bba1707bb981aad960d795def05849347a478b60ee7962d0447f7/diff" to get inode usage: stat /var/lib/containers/storage/overlay/10dc22edbe2bba1707bb981aad960d795def05849347a478b60ee7962d0447f7/diff: no such file or directory, extraDiskErr: <nil>
	Jan 16 02:51:30 ingress-addon-legacy-570599 kubelet[1858]: E0116 02:51:30.068217    1858 fsHandler.go:118] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/0dc95ffbbdfa090721e389b341e7b3506fd68e704271a11c93e8142bc7f5a930/diff" to get inode usage: stat /var/lib/containers/storage/overlay/0dc95ffbbdfa090721e389b341e7b3506fd68e704271a11c93e8142bc7f5a930/diff: no such file or directory, extraDiskErr: <nil>
	Jan 16 02:51:30 ingress-addon-legacy-570599 kubelet[1858]: E0116 02:51:30.073696    1858 fsHandler.go:118] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/10dc22edbe2bba1707bb981aad960d795def05849347a478b60ee7962d0447f7/diff" to get inode usage: stat /var/lib/containers/storage/overlay/10dc22edbe2bba1707bb981aad960d795def05849347a478b60ee7962d0447f7/diff: no such file or directory, extraDiskErr: <nil>
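The ImageInspectError entries at the top of this kubelet log are the proximate cause of the ingress-dns failure: CRI-O refuses to expand the short name cryptexlabs/minikube-ingress-dns:0.3.0 because the node's /etc/containers/registries.conf defines no unqualified-search registries. A minimal sketch of a registries.conf fragment that would let such a pull proceed, assuming docker.io is the intended registry (the durable fix is for the addon to ship a fully qualified image reference):

	# /etc/containers/registries.conf (sketch)
	# lets short names like "cryptexlabs/..." resolve via docker.io
	unqualified-search-registries = ["docker.io"]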
	
	
	==> storage-provisioner [74db31c402802fd4765c32dadfb46c4ababb3a4171431c0b10d2742235e423dd] <==
	I0116 02:48:04.855781       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0116 02:48:04.865812       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0116 02:48:04.865871       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0116 02:48:04.900885       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0116 02:48:04.901006       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"df7556f3-24ea-4cdc-b491-4685ecf620f8", APIVersion:"v1", ResourceVersion:"431", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-570599_f1cff185-fb56-41fd-bf26-874b79cf4177 became leader
	I0116 02:48:04.901105       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-570599_f1cff185-fb56-41fd-bf26-874b79cf4177!
	I0116 02:48:05.002200       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-570599_f1cff185-fb56-41fd-bf26-874b79cf4177!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-570599 -n ingress-addon-legacy-570599
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-570599 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (180.94s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (3.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-061156 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-061156 -- exec busybox-5bc68d56bd-4dmmg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-061156 -- exec busybox-5bc68d56bd-4dmmg -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-061156 -- exec busybox-5bc68d56bd-4dmmg -- sh -c "ping -c 1 192.168.58.1": exit status 1 (159.818452ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-4dmmg): exit status 1
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-061156 -- exec busybox-5bc68d56bd-hwz9l -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-061156 -- exec busybox-5bc68d56bd-hwz9l -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-061156 -- exec busybox-5bc68d56bd-hwz9l -- sh -c "ping -c 1 192.168.58.1": exit status 1 (161.948241ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-hwz9l): exit status 1
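Both exec attempts print "ping: permission denied (are you root?)", meaning the ICMP echo never left the pod: busybox ping opens a raw socket, which requires CAP_NET_RAW (or a kernel with net.ipv4.ping_group_range opened up) when the container runs unprivileged. A sketch of a busybox pod that can ping; the pod name and image tag are illustrative, not the test's actual manifest:

	apiVersion: v1
	kind: Pod
	metadata:
	  name: busybox-ping          # hypothetical name
	spec:
	  containers:
	  - name: busybox
	    image: busybox:1.36       # assumed tag
	    command: ["sleep", "3600"]
	    securityContext:
	      capabilities:
	        add: ["NET_RAW"]      # raw ICMP socket for non-root ping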
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-061156
helpers_test.go:235: (dbg) docker inspect multinode-061156:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1df13cf78442615c8dfdcb1d98e000ff80fa092233fc799b27825520e581bf81",
	        "Created": "2024-01-16T02:56:37.697017294Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 536984,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-16T02:56:37.945991497Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9941de2e064a4a6a7155bfc66cedd2854b8c725b77bb8d4eaf81bef39f951dd7",
	        "ResolvConfPath": "/var/lib/docker/containers/1df13cf78442615c8dfdcb1d98e000ff80fa092233fc799b27825520e581bf81/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1df13cf78442615c8dfdcb1d98e000ff80fa092233fc799b27825520e581bf81/hostname",
	        "HostsPath": "/var/lib/docker/containers/1df13cf78442615c8dfdcb1d98e000ff80fa092233fc799b27825520e581bf81/hosts",
	        "LogPath": "/var/lib/docker/containers/1df13cf78442615c8dfdcb1d98e000ff80fa092233fc799b27825520e581bf81/1df13cf78442615c8dfdcb1d98e000ff80fa092233fc799b27825520e581bf81-json.log",
	        "Name": "/multinode-061156",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-061156:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "multinode-061156",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2ad4c5bc0cd05cb480ee8cf4c6e5e727f818e2469843e9cf115c1acd8568d56d-init/diff:/var/lib/docker/overlay2/bba00fb4c7e32355be8b1614d52104fcb5f09794e9ed4467560e2767dcfd351b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2ad4c5bc0cd05cb480ee8cf4c6e5e727f818e2469843e9cf115c1acd8568d56d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2ad4c5bc0cd05cb480ee8cf4c6e5e727f818e2469843e9cf115c1acd8568d56d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2ad4c5bc0cd05cb480ee8cf4c6e5e727f818e2469843e9cf115c1acd8568d56d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-061156",
	                "Source": "/var/lib/docker/volumes/multinode-061156/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-061156",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-061156",
	                "name.minikube.sigs.k8s.io": "multinode-061156",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fa2d8059ee74776cb2dbdced7ada3a8a096584764c419f5c00d2e702975d5104",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33282"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33281"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33278"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33280"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33279"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/fa2d8059ee74",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-061156": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "1df13cf78442",
	                        "multinode-061156"
	                    ],
	                    "NetworkID": "c1de3295e3127f1fbc2c9f7449ee46e88e353e181793abb89c1a761b6e6fd4cc",
	                    "EndpointID": "ca7267843a8ff4ca6062f1b4371840d73ec035948cc41467ec03ac38a9c4dd17",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
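Note: the Ports map in the inspect output above is how minikube locates the node's forwarded SSH endpoint. As a sketch for this run (using the same Go template the start logs below use), the host-side port for 22/tcp can be read back directly:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' multinode-061156
	# prints 33282 for this run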
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-061156 -n multinode-061156
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061156 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-061156 logs -n 25: (1.273138097s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-732065                           | mount-start-2-732065 | jenkins | v1.32.0 | 16 Jan 24 02:56 UTC | 16 Jan 24 02:56 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-732065 ssh -- ls                    | mount-start-2-732065 | jenkins | v1.32.0 | 16 Jan 24 02:56 UTC | 16 Jan 24 02:56 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-711143                           | mount-start-1-711143 | jenkins | v1.32.0 | 16 Jan 24 02:56 UTC | 16 Jan 24 02:56 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-732065 ssh -- ls                    | mount-start-2-732065 | jenkins | v1.32.0 | 16 Jan 24 02:56 UTC | 16 Jan 24 02:56 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-732065                           | mount-start-2-732065 | jenkins | v1.32.0 | 16 Jan 24 02:56 UTC | 16 Jan 24 02:56 UTC |
	| start   | -p mount-start-2-732065                           | mount-start-2-732065 | jenkins | v1.32.0 | 16 Jan 24 02:56 UTC | 16 Jan 24 02:56 UTC |
	| ssh     | mount-start-2-732065 ssh -- ls                    | mount-start-2-732065 | jenkins | v1.32.0 | 16 Jan 24 02:56 UTC | 16 Jan 24 02:56 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-732065                           | mount-start-2-732065 | jenkins | v1.32.0 | 16 Jan 24 02:56 UTC | 16 Jan 24 02:56 UTC |
	| delete  | -p mount-start-1-711143                           | mount-start-1-711143 | jenkins | v1.32.0 | 16 Jan 24 02:56 UTC | 16 Jan 24 02:56 UTC |
	| start   | -p multinode-061156                               | multinode-061156     | jenkins | v1.32.0 | 16 Jan 24 02:56 UTC | 16 Jan 24 02:58 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-061156 -- apply -f                   | multinode-061156     | jenkins | v1.32.0 | 16 Jan 24 02:58 UTC | 16 Jan 24 02:58 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-061156 -- rollout                    | multinode-061156     | jenkins | v1.32.0 | 16 Jan 24 02:58 UTC | 16 Jan 24 02:58 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-061156 -- get pods -o                | multinode-061156     | jenkins | v1.32.0 | 16 Jan 24 02:58 UTC | 16 Jan 24 02:58 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-061156 -- get pods -o                | multinode-061156     | jenkins | v1.32.0 | 16 Jan 24 02:58 UTC | 16 Jan 24 02:58 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-061156 -- exec                       | multinode-061156     | jenkins | v1.32.0 | 16 Jan 24 02:58 UTC | 16 Jan 24 02:58 UTC |
	|         | busybox-5bc68d56bd-4dmmg --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-061156 -- exec                       | multinode-061156     | jenkins | v1.32.0 | 16 Jan 24 02:58 UTC | 16 Jan 24 02:58 UTC |
	|         | busybox-5bc68d56bd-hwz9l --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-061156 -- exec                       | multinode-061156     | jenkins | v1.32.0 | 16 Jan 24 02:58 UTC | 16 Jan 24 02:58 UTC |
	|         | busybox-5bc68d56bd-4dmmg --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-061156 -- exec                       | multinode-061156     | jenkins | v1.32.0 | 16 Jan 24 02:58 UTC | 16 Jan 24 02:58 UTC |
	|         | busybox-5bc68d56bd-hwz9l --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-061156 -- exec                       | multinode-061156     | jenkins | v1.32.0 | 16 Jan 24 02:58 UTC | 16 Jan 24 02:58 UTC |
	|         | busybox-5bc68d56bd-4dmmg -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-061156 -- exec                       | multinode-061156     | jenkins | v1.32.0 | 16 Jan 24 02:58 UTC | 16 Jan 24 02:58 UTC |
	|         | busybox-5bc68d56bd-hwz9l -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-061156 -- get pods -o                | multinode-061156     | jenkins | v1.32.0 | 16 Jan 24 02:58 UTC | 16 Jan 24 02:58 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-061156 -- exec                       | multinode-061156     | jenkins | v1.32.0 | 16 Jan 24 02:58 UTC | 16 Jan 24 02:58 UTC |
	|         | busybox-5bc68d56bd-4dmmg                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-061156 -- exec                       | multinode-061156     | jenkins | v1.32.0 | 16 Jan 24 02:58 UTC |                     |
	|         | busybox-5bc68d56bd-4dmmg -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-061156 -- exec                       | multinode-061156     | jenkins | v1.32.0 | 16 Jan 24 02:58 UTC | 16 Jan 24 02:58 UTC |
	|         | busybox-5bc68d56bd-hwz9l                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-061156 -- exec                       | multinode-061156     | jenkins | v1.32.0 | 16 Jan 24 02:58 UTC |                     |
	|         | busybox-5bc68d56bd-hwz9l -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
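	Note: the two kubectl exec rows above with an empty End Time are the pings that failed in this test. A minimal reproduction sketch against this profile, using the pod names recorded for this run:

	out/minikube-linux-amd64 kubectl -p multinode-061156 -- exec busybox-5bc68d56bd-4dmmg -- sh -c "ping -c 1 192.168.58.1"
	out/minikube-linux-amd64 kubectl -p multinode-061156 -- exec busybox-5bc68d56bd-hwz9l -- sh -c "ping -c 1 192.168.58.1"

	192.168.58.1 is the gateway of the multinode-061156 docker network shown in the inspect output above.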
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 02:56:31
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 02:56:31.670331  536361 out.go:296] Setting OutFile to fd 1 ...
	I0116 02:56:31.670485  536361 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:56:31.670494  536361 out.go:309] Setting ErrFile to fd 2...
	I0116 02:56:31.670499  536361 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:56:31.670692  536361 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-443749/.minikube/bin
	I0116 02:56:31.671321  536361 out.go:303] Setting JSON to false
	I0116 02:56:31.672285  536361 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":9538,"bootTime":1705364254,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 02:56:31.672350  536361 start.go:138] virtualization: kvm guest
	I0116 02:56:31.674607  536361 out.go:177] * [multinode-061156] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 02:56:31.675940  536361 out.go:177]   - MINIKUBE_LOCATION=17965
	I0116 02:56:31.675908  536361 notify.go:220] Checking for updates...
	I0116 02:56:31.677384  536361 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 02:56:31.678652  536361 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17965-443749/kubeconfig
	I0116 02:56:31.679906  536361 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-443749/.minikube
	I0116 02:56:31.681280  536361 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0116 02:56:31.682553  536361 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 02:56:31.683885  536361 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 02:56:31.704406  536361 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0116 02:56:31.704543  536361 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 02:56:31.756396  536361 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:36 SystemTime:2024-01-16 02:56:31.748306037 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0116 02:56:31.756507  536361 docker.go:295] overlay module found
	I0116 02:56:31.758456  536361 out.go:177] * Using the docker driver based on user configuration
	I0116 02:56:31.759754  536361 start.go:298] selected driver: docker
	I0116 02:56:31.759767  536361 start.go:902] validating driver "docker" against <nil>
	I0116 02:56:31.759782  536361 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 02:56:31.760579  536361 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 02:56:31.807558  536361 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:36 SystemTime:2024-01-16 02:56:31.799863007 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0116 02:56:31.807710  536361 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0116 02:56:31.807919  536361 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0116 02:56:31.809704  536361 out.go:177] * Using Docker driver with root privileges
	I0116 02:56:31.811099  536361 cni.go:84] Creating CNI manager for ""
	I0116 02:56:31.811116  536361 cni.go:136] 0 nodes found, recommending kindnet
	I0116 02:56:31.811131  536361 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0116 02:56:31.811143  536361 start_flags.go:321] config:
	{Name:multinode-061156 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-061156 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 02:56:31.812496  536361 out.go:177] * Starting control plane node multinode-061156 in cluster multinode-061156
	I0116 02:56:31.813636  536361 cache.go:121] Beginning downloading kic base image for docker with crio
	I0116 02:56:31.814793  536361 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0116 02:56:31.815944  536361 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 02:56:31.815979  536361 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17965-443749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0116 02:56:31.815991  536361 cache.go:56] Caching tarball of preloaded images
	I0116 02:56:31.815971  536361 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0116 02:56:31.816127  536361 preload.go:174] Found /home/jenkins/minikube-integration/17965-443749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0116 02:56:31.816146  536361 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0116 02:56:31.816506  536361 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/multinode-061156/config.json ...
	I0116 02:56:31.816532  536361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/multinode-061156/config.json: {Name:mk3bc0b5db4dcadc1f9a1e6401054501d965649a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:56:31.832141  536361 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0116 02:56:31.832163  536361 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
	I0116 02:56:31.832186  536361 cache.go:194] Successfully downloaded all kic artifacts
	I0116 02:56:31.832235  536361 start.go:365] acquiring machines lock for multinode-061156: {Name:mk40e189c9768add72474c871d7e7020cf8cedf9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 02:56:31.832342  536361 start.go:369] acquired machines lock for "multinode-061156" in 88.104µs
	I0116 02:56:31.832369  536361 start.go:93] Provisioning new machine with config: &{Name:multinode-061156 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-061156 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 02:56:31.832470  536361 start.go:125] createHost starting for "" (driver="docker")
	I0116 02:56:31.834421  536361 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0116 02:56:31.834674  536361 start.go:159] libmachine.API.Create for "multinode-061156" (driver="docker")
	I0116 02:56:31.834708  536361 client.go:168] LocalClient.Create starting
	I0116 02:56:31.834765  536361 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17965-443749/.minikube/certs/ca.pem
	I0116 02:56:31.834801  536361 main.go:141] libmachine: Decoding PEM data...
	I0116 02:56:31.834823  536361 main.go:141] libmachine: Parsing certificate...
	I0116 02:56:31.834880  536361 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17965-443749/.minikube/certs/cert.pem
	I0116 02:56:31.834919  536361 main.go:141] libmachine: Decoding PEM data...
	I0116 02:56:31.834934  536361 main.go:141] libmachine: Parsing certificate...
	I0116 02:56:31.835260  536361 cli_runner.go:164] Run: docker network inspect multinode-061156 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0116 02:56:31.849205  536361 cli_runner.go:211] docker network inspect multinode-061156 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0116 02:56:31.849280  536361 network_create.go:281] running [docker network inspect multinode-061156] to gather additional debugging logs...
	I0116 02:56:31.849303  536361 cli_runner.go:164] Run: docker network inspect multinode-061156
	W0116 02:56:31.864001  536361 cli_runner.go:211] docker network inspect multinode-061156 returned with exit code 1
	I0116 02:56:31.864025  536361 network_create.go:284] error running [docker network inspect multinode-061156]: docker network inspect multinode-061156: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-061156 not found
	I0116 02:56:31.864037  536361 network_create.go:286] output of [docker network inspect multinode-061156]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-061156 not found
	
	** /stderr **
	I0116 02:56:31.864139  536361 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0116 02:56:31.879619  536361 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-24b190abdccc IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:79:6f:d5:64} reservation:<nil>}
	I0116 02:56:31.880072  536361 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002397010}
	I0116 02:56:31.880100  536361 network_create.go:124] attempt to create docker network multinode-061156 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0116 02:56:31.880160  536361 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-061156 multinode-061156
	I0116 02:56:31.931689  536361 network_create.go:108] docker network multinode-061156 192.168.58.0/24 created
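	Note: the subnet and gateway picked above can be confirmed after the fact; a sketch using the same IPAM template fields the runner queries elsewhere in this log:

	docker network inspect multinode-061156 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
	# 192.168.58.0/24 192.168.58.1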
	I0116 02:56:31.931728  536361 kic.go:121] calculated static IP "192.168.58.2" for the "multinode-061156" container
	I0116 02:56:31.931797  536361 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0116 02:56:31.946509  536361 cli_runner.go:164] Run: docker volume create multinode-061156 --label name.minikube.sigs.k8s.io=multinode-061156 --label created_by.minikube.sigs.k8s.io=true
	I0116 02:56:31.962267  536361 oci.go:103] Successfully created a docker volume multinode-061156
	I0116 02:56:31.962346  536361 cli_runner.go:164] Run: docker run --rm --name multinode-061156-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-061156 --entrypoint /usr/bin/test -v multinode-061156:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
	I0116 02:56:32.438644  536361 oci.go:107] Successfully prepared a docker volume multinode-061156
	I0116 02:56:32.438706  536361 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 02:56:32.438734  536361 kic.go:194] Starting extracting preloaded images to volume ...
	I0116 02:56:32.438817  536361 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17965-443749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-061156:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0116 02:56:37.633523  536361 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17965-443749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-061156:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir: (5.194660235s)
	I0116 02:56:37.633565  536361 kic.go:203] duration metric: took 5.194826 seconds to extract preloaded images to volume
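	Note: the preload step above is just an lz4-compressed tarball unpacked into the node's /var volume by a throwaway container; the equivalent bare tar invocation (a sketch, with the paths taken from the command above) is:

	tar -I lz4 -xf /preloaded.tar -C /extractDir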
	W0116 02:56:37.633719  536361 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0116 02:56:37.633834  536361 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0116 02:56:37.682397  536361 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-061156 --name multinode-061156 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-061156 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-061156 --network multinode-061156 --ip 192.168.58.2 --volume multinode-061156:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0
	I0116 02:56:37.954990  536361 cli_runner.go:164] Run: docker container inspect multinode-061156 --format={{.State.Running}}
	I0116 02:56:37.972904  536361 cli_runner.go:164] Run: docker container inspect multinode-061156 --format={{.State.Status}}
	I0116 02:56:37.990119  536361 cli_runner.go:164] Run: docker exec multinode-061156 stat /var/lib/dpkg/alternatives/iptables
	I0116 02:56:38.031719  536361 oci.go:144] the created container "multinode-061156" has a running status.
	I0116 02:56:38.031775  536361 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17965-443749/.minikube/machines/multinode-061156/id_rsa...
	I0116 02:56:38.091942  536361 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-443749/.minikube/machines/multinode-061156/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0116 02:56:38.092002  536361 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17965-443749/.minikube/machines/multinode-061156/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0116 02:56:38.113356  536361 cli_runner.go:164] Run: docker container inspect multinode-061156 --format={{.State.Status}}
	I0116 02:56:38.131452  536361 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0116 02:56:38.131478  536361 kic_runner.go:114] Args: [docker exec --privileged multinode-061156 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0116 02:56:38.170669  536361 cli_runner.go:164] Run: docker container inspect multinode-061156 --format={{.State.Status}}
	I0116 02:56:38.192236  536361 machine.go:88] provisioning docker machine ...
	I0116 02:56:38.192309  536361 ubuntu.go:169] provisioning hostname "multinode-061156"
	I0116 02:56:38.192388  536361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-061156
	I0116 02:56:38.208219  536361 main.go:141] libmachine: Using SSH client type: native
	I0116 02:56:38.208823  536361 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 33282 <nil> <nil>}
	I0116 02:56:38.208849  536361 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-061156 && echo "multinode-061156" | sudo tee /etc/hostname
	I0116 02:56:38.209562  536361 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60284->127.0.0.1:33282: read: connection reset by peer
	I0116 02:56:41.355124  536361 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-061156
	
	I0116 02:56:41.355231  536361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-061156
	I0116 02:56:41.371571  536361 main.go:141] libmachine: Using SSH client type: native
	I0116 02:56:41.371926  536361 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 33282 <nil> <nil>}
	I0116 02:56:41.371949  536361 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-061156' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-061156/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-061156' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 02:56:41.504503  536361 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 02:56:41.504541  536361 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17965-443749/.minikube CaCertPath:/home/jenkins/minikube-integration/17965-443749/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17965-443749/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17965-443749/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17965-443749/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17965-443749/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17965-443749/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17965-443749/.minikube}
	I0116 02:56:41.504566  536361 ubuntu.go:177] setting up certificates
	I0116 02:56:41.504579  536361 provision.go:83] configureAuth start
	I0116 02:56:41.504636  536361 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-061156
	I0116 02:56:41.521095  536361 provision.go:138] copyHostCerts
	I0116 02:56:41.521136  536361 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-443749/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17965-443749/.minikube/ca.pem
	I0116 02:56:41.521170  536361 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-443749/.minikube/ca.pem, removing ...
	I0116 02:56:41.521183  536361 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-443749/.minikube/ca.pem
	I0116 02:56:41.521250  536361 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-443749/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17965-443749/.minikube/ca.pem (1078 bytes)
	I0116 02:56:41.521398  536361 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-443749/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17965-443749/.minikube/cert.pem
	I0116 02:56:41.521430  536361 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-443749/.minikube/cert.pem, removing ...
	I0116 02:56:41.521441  536361 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-443749/.minikube/cert.pem
	I0116 02:56:41.521484  536361 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-443749/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17965-443749/.minikube/cert.pem (1123 bytes)
	I0116 02:56:41.521558  536361 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-443749/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17965-443749/.minikube/key.pem
	I0116 02:56:41.521588  536361 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-443749/.minikube/key.pem, removing ...
	I0116 02:56:41.521594  536361 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-443749/.minikube/key.pem
	I0116 02:56:41.521632  536361 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-443749/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17965-443749/.minikube/key.pem (1675 bytes)
	I0116 02:56:41.521717  536361 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17965-443749/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17965-443749/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17965-443749/.minikube/certs/ca-key.pem org=jenkins.multinode-061156 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-061156]
	I0116 02:56:41.615003  536361 provision.go:172] copyRemoteCerts
	I0116 02:56:41.615070  536361 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 02:56:41.615123  536361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-061156
	I0116 02:56:41.631951  536361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33282 SSHKeyPath:/home/jenkins/minikube-integration/17965-443749/.minikube/machines/multinode-061156/id_rsa Username:docker}
	I0116 02:56:41.728776  536361 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-443749/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0116 02:56:41.728837  536361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-443749/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0116 02:56:41.750734  536361 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-443749/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0116 02:56:41.750800  536361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-443749/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0116 02:56:41.772404  536361 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-443749/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0116 02:56:41.772458  536361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-443749/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0116 02:56:41.793677  536361 provision.go:86] duration metric: configureAuth took 289.083311ms
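	Note: everything needed to reach the node by hand is visible in the sshutil lines above (loopback address, forwarded port, key path, docker user); a manual equivalent for this run would be roughly:

	ssh -i /home/jenkins/minikube-integration/17965-443749/.minikube/machines/multinode-061156/id_rsa -p 33282 docker@127.0.0.1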
	I0116 02:56:41.793706  536361 ubuntu.go:193] setting minikube options for container-runtime
	I0116 02:56:41.793895  536361 config.go:182] Loaded profile config "multinode-061156": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 02:56:41.794024  536361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-061156
	I0116 02:56:41.809756  536361 main.go:141] libmachine: Using SSH client type: native
	I0116 02:56:41.810112  536361 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 33282 <nil> <nil>}
	I0116 02:56:41.810135  536361 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 02:56:42.025574  536361 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 02:56:42.025604  536361 machine.go:91] provisioned docker machine in 3.8333146s
	I0116 02:56:42.025614  536361 client.go:171] LocalClient.Create took 10.190899716s
	I0116 02:56:42.025633  536361 start.go:167] duration metric: libmachine.API.Create for "multinode-061156" took 10.190959477s
	I0116 02:56:42.025643  536361 start.go:300] post-start starting for "multinode-061156" (driver="docker")
	I0116 02:56:42.025661  536361 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 02:56:42.025735  536361 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 02:56:42.025784  536361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-061156
	I0116 02:56:42.042282  536361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33282 SSHKeyPath:/home/jenkins/minikube-integration/17965-443749/.minikube/machines/multinode-061156/id_rsa Username:docker}
	I0116 02:56:42.137387  536361 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 02:56:42.140598  536361 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I0116 02:56:42.140628  536361 command_runner.go:130] > NAME="Ubuntu"
	I0116 02:56:42.140636  536361 command_runner.go:130] > VERSION_ID="22.04"
	I0116 02:56:42.140642  536361 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I0116 02:56:42.140646  536361 command_runner.go:130] > VERSION_CODENAME=jammy
	I0116 02:56:42.140650  536361 command_runner.go:130] > ID=ubuntu
	I0116 02:56:42.140654  536361 command_runner.go:130] > ID_LIKE=debian
	I0116 02:56:42.140659  536361 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0116 02:56:42.140665  536361 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0116 02:56:42.140674  536361 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0116 02:56:42.140683  536361 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0116 02:56:42.140690  536361 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0116 02:56:42.140754  536361 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0116 02:56:42.140783  536361 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0116 02:56:42.140794  536361 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0116 02:56:42.140803  536361 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0116 02:56:42.140817  536361 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-443749/.minikube/addons for local assets ...
	I0116 02:56:42.140880  536361 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-443749/.minikube/files for local assets ...
	I0116 02:56:42.140947  536361 filesync.go:149] local asset: /home/jenkins/minikube-integration/17965-443749/.minikube/files/etc/ssl/certs/4505732.pem -> 4505732.pem in /etc/ssl/certs
	I0116 02:56:42.140957  536361 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-443749/.minikube/files/etc/ssl/certs/4505732.pem -> /etc/ssl/certs/4505732.pem
	I0116 02:56:42.141040  536361 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 02:56:42.149007  536361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-443749/.minikube/files/etc/ssl/certs/4505732.pem --> /etc/ssl/certs/4505732.pem (1708 bytes)
	I0116 02:56:42.170788  536361 start.go:303] post-start completed in 145.122688ms
	I0116 02:56:42.171144  536361 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-061156
	I0116 02:56:42.186964  536361 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/multinode-061156/config.json ...
	I0116 02:56:42.187241  536361 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0116 02:56:42.187291  536361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-061156
	I0116 02:56:42.203694  536361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33282 SSHKeyPath:/home/jenkins/minikube-integration/17965-443749/.minikube/machines/multinode-061156/id_rsa Username:docker}
	I0116 02:56:42.296734  536361 command_runner.go:130] > 26%!(MISSING)
	I0116 02:56:42.297047  536361 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0116 02:56:42.300884  536361 command_runner.go:130] > 216G
	I0116 02:56:42.301184  536361 start.go:128] duration metric: createHost completed in 10.468694597s
	I0116 02:56:42.301207  536361 start.go:83] releasing machines lock for "multinode-061156", held for 10.468852147s
	I0116 02:56:42.301278  536361 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-061156
	I0116 02:56:42.317565  536361 ssh_runner.go:195] Run: cat /version.json
	I0116 02:56:42.317643  536361 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 02:56:42.317656  536361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-061156
	I0116 02:56:42.317696  536361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-061156
	I0116 02:56:42.333939  536361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33282 SSHKeyPath:/home/jenkins/minikube-integration/17965-443749/.minikube/machines/multinode-061156/id_rsa Username:docker}
	I0116 02:56:42.335263  536361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33282 SSHKeyPath:/home/jenkins/minikube-integration/17965-443749/.minikube/machines/multinode-061156/id_rsa Username:docker}
	I0116 02:56:42.423841  536361 command_runner.go:130] > {"iso_version": "v1.32.1-1703784139-17866", "kicbase_version": "v0.0.42-1704759386-17866", "minikube_version": "v1.32.0", "commit": "3c45a4d018cdc90b01d9bcb479fb293aad58ed8f"}
	I0116 02:56:42.424016  536361 ssh_runner.go:195] Run: systemctl --version
	I0116 02:56:42.428378  536361 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.11)
	I0116 02:56:42.428413  536361 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0116 02:56:42.428464  536361 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 02:56:42.515947  536361 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0116 02:56:42.566038  536361 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0116 02:56:42.570295  536361 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0116 02:56:42.570326  536361 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0116 02:56:42.570332  536361 command_runner.go:130] > Device: 37h/55d	Inode: 1043901     Links: 1
	I0116 02:56:42.570341  536361 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0116 02:56:42.570351  536361 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0116 02:56:42.570361  536361 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0116 02:56:42.570373  536361 command_runner.go:130] > Change: 2024-01-16 02:37:07.766569517 +0000
	I0116 02:56:42.570392  536361 command_runner.go:130] >  Birth: 2024-01-16 02:37:07.766569517 +0000
	I0116 02:56:42.570451  536361 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 02:56:42.588448  536361 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0116 02:56:42.588530  536361 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 02:56:42.614642  536361 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0116 02:56:42.614722  536361 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
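Note that the two find commands above do not delete the conflicting CNI configs; they rename the stock loopback conf and any bridge/podman conflists with a .mk_disabled suffix, so CRI-O only loads the CNI minikube manages while the originals stay recoverable. A quoted-glob sketch of the same commands:

	sudo find /etc/cni/net.d -maxdepth 1 -type f -name '*loopback.conf*' -not -name '*.mk_disabled' \
	  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
	sudo find /etc/cni/net.d -maxdepth 1 -type f \( \( -name '*bridge*' -o -name '*podman*' \) \
	  -and -not -name '*.mk_disabled' \) -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;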
	I0116 02:56:42.614732  536361 start.go:475] detecting cgroup driver to use...
	I0116 02:56:42.614771  536361 detect.go:196] detected "cgroupfs" cgroup driver on host os
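detect.go settles on "cgroupfs" here, which drives the CRI-O configuration a few lines below. One common way to make the same determination from a shell, shown purely as an illustration (not necessarily how detect.go does it), is to inspect the filesystem type mounted at /sys/fs/cgroup:

	stat -fc %T /sys/fs/cgroup
	# tmpfs     -> legacy cgroup v1 hierarchy, matching cgroup_manager = "cgroupfs"
	# cgroup2fs -> unified cgroup v2 hierarchy, typically paired with systemd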
	I0116 02:56:42.614825  536361 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 02:56:42.628814  536361 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 02:56:42.639041  536361 docker.go:217] disabling cri-docker service (if available) ...
	I0116 02:56:42.639105  536361 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 02:56:42.651077  536361 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 02:56:42.663475  536361 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 02:56:42.743901  536361 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 02:56:42.826286  536361 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0116 02:56:42.826327  536361 docker.go:233] disabling docker service ...
	I0116 02:56:42.826379  536361 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 02:56:42.843534  536361 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 02:56:42.853731  536361 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 02:56:42.927477  536361 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0116 02:56:42.927577  536361 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 02:56:42.998784  536361 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0116 02:56:42.998873  536361 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
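The stop / disable / mask sequence above is the usual systemd recipe for retiring a container runtime: stopping the .socket unit first prevents socket activation from resurrecting the daemon, disable drops the install symlinks, and masking links the unit to /dev/null so nothing can start it again. The final is-active probe exiting non-zero confirms docker is down. To double-check by hand:

	systemctl is-enabled docker.service cri-docker.service   # expected: masked, masked
	systemctl is-active docker.service cri-docker.socket     # expected: inactive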
	I0116 02:56:43.009574  536361 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 02:56:43.024316  536361 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
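The tee above writes /etc/crictl.yaml, and the echoed line confirms its entire content: a single key pointing crictl at CRI-O's socket, so the crictl invocations later in this log need no --runtime-endpoint flag.

	# /etc/crictl.yaml
	runtime-endpoint: unix:///var/run/crio/crio.sock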
	I0116 02:56:43.024361  536361 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 02:56:43.024407  536361 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 02:56:43.033285  536361 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 02:56:43.033338  536361 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 02:56:43.042025  536361 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 02:56:43.050695  536361 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
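Taken together, the sed edits above pin the pause image, set the cgroup manager to match the host detection, and re-insert the conmon cgroup. Reconstructed from the commands (not dumped from the node), the affected lines of /etc/crio/crio.conf.d/02-crio.conf now read:

	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"

The crio config output near the end of this log confirms the cgroup_manager and conmon_cgroup values took effect.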
	I0116 02:56:43.059438  536361 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 02:56:43.067566  536361 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 02:56:43.074880  536361 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0116 02:56:43.075625  536361 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 02:56:43.083146  536361 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 02:56:43.158263  536361 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 02:56:43.242190  536361 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 02:56:43.242247  536361 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 02:56:43.245616  536361 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0116 02:56:43.245639  536361 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0116 02:56:43.245648  536361 command_runner.go:130] > Device: 40h/64d	Inode: 186         Links: 1
	I0116 02:56:43.245660  536361 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0116 02:56:43.245669  536361 command_runner.go:130] > Access: 2024-01-16 02:56:43.230388598 +0000
	I0116 02:56:43.245678  536361 command_runner.go:130] > Modify: 2024-01-16 02:56:43.230388598 +0000
	I0116 02:56:43.245686  536361 command_runner.go:130] > Change: 2024-01-16 02:56:43.230388598 +0000
	I0116 02:56:43.245696  536361 command_runner.go:130] >  Birth: -
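start.go budgets 60s for /var/run/crio/crio.sock to appear after the restart, but the stat above succeeds on the first attempt, a few milliseconds after systemctl restart crio returned. A shell sketch of an equivalent wait (minikube's real loop is Go code in start.go):

	for _ in $(seq 1 60); do
	  stat /var/run/crio/crio.sock >/dev/null 2>&1 && break
	  sleep 1
	done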
	I0116 02:56:43.245735  536361 start.go:543] Will wait 60s for crictl version
	I0116 02:56:43.245772  536361 ssh_runner.go:195] Run: which crictl
	I0116 02:56:43.248628  536361 command_runner.go:130] > /usr/bin/crictl
	I0116 02:56:43.248750  536361 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 02:56:43.280330  536361 command_runner.go:130] > Version:  0.1.0
	I0116 02:56:43.280353  536361 command_runner.go:130] > RuntimeName:  cri-o
	I0116 02:56:43.280357  536361 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0116 02:56:43.280363  536361 command_runner.go:130] > RuntimeApiVersion:  v1
	I0116 02:56:43.280379  536361 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0116 02:56:43.280441  536361 ssh_runner.go:195] Run: crio --version
	I0116 02:56:43.312853  536361 command_runner.go:130] > crio version 1.24.6
	I0116 02:56:43.312889  536361 command_runner.go:130] > Version:          1.24.6
	I0116 02:56:43.312911  536361 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0116 02:56:43.312915  536361 command_runner.go:130] > GitTreeState:     clean
	I0116 02:56:43.312928  536361 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0116 02:56:43.312933  536361 command_runner.go:130] > GoVersion:        go1.18.2
	I0116 02:56:43.312937  536361 command_runner.go:130] > Compiler:         gc
	I0116 02:56:43.312942  536361 command_runner.go:130] > Platform:         linux/amd64
	I0116 02:56:43.312947  536361 command_runner.go:130] > Linkmode:         dynamic
	I0116 02:56:43.312954  536361 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0116 02:56:43.312961  536361 command_runner.go:130] > SeccompEnabled:   true
	I0116 02:56:43.312965  536361 command_runner.go:130] > AppArmorEnabled:  false
	I0116 02:56:43.314374  536361 ssh_runner.go:195] Run: crio --version
	I0116 02:56:43.348695  536361 command_runner.go:130] > crio version 1.24.6
	I0116 02:56:43.348715  536361 command_runner.go:130] > Version:          1.24.6
	I0116 02:56:43.348726  536361 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0116 02:56:43.348730  536361 command_runner.go:130] > GitTreeState:     clean
	I0116 02:56:43.348735  536361 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0116 02:56:43.348740  536361 command_runner.go:130] > GoVersion:        go1.18.2
	I0116 02:56:43.348746  536361 command_runner.go:130] > Compiler:         gc
	I0116 02:56:43.348753  536361 command_runner.go:130] > Platform:         linux/amd64
	I0116 02:56:43.348760  536361 command_runner.go:130] > Linkmode:         dynamic
	I0116 02:56:43.348773  536361 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0116 02:56:43.348783  536361 command_runner.go:130] > SeccompEnabled:   true
	I0116 02:56:43.348792  536361 command_runner.go:130] > AppArmorEnabled:  false
	I0116 02:56:43.350904  536361 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I0116 02:56:43.352326  536361 cli_runner.go:164] Run: docker network inspect multinode-061156 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0116 02:56:43.368929  536361 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0116 02:56:43.372607  536361 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
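This grep/rewrite pair pins host.minikube.internal inside the node: grep checks whether the gateway mapping already exists, then the bash one-liner filters any stale host.minikube.internal entry out of /etc/hosts and appends the fresh one, staging the result in a temp file before copying it back. Verifying afterwards (a sketch):

	grep 'host.minikube.internal' /etc/hosts
	# expected: 192.168.58.1	host.minikube.internal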
	I0116 02:56:43.382937  536361 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 02:56:43.382998  536361 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 02:56:43.435670  536361 command_runner.go:130] > {
	I0116 02:56:43.435690  536361 command_runner.go:130] >   "images": [
	I0116 02:56:43.435694  536361 command_runner.go:130] >     {
	I0116 02:56:43.435702  536361 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0116 02:56:43.435707  536361 command_runner.go:130] >       "repoTags": [
	I0116 02:56:43.435712  536361 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0116 02:56:43.435716  536361 command_runner.go:130] >       ],
	I0116 02:56:43.435720  536361 command_runner.go:130] >       "repoDigests": [
	I0116 02:56:43.435728  536361 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0116 02:56:43.435735  536361 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0116 02:56:43.435741  536361 command_runner.go:130] >       ],
	I0116 02:56:43.435745  536361 command_runner.go:130] >       "size": "65258016",
	I0116 02:56:43.435749  536361 command_runner.go:130] >       "uid": null,
	I0116 02:56:43.435753  536361 command_runner.go:130] >       "username": "",
	I0116 02:56:43.435761  536361 command_runner.go:130] >       "spec": null,
	I0116 02:56:43.435765  536361 command_runner.go:130] >       "pinned": false
	I0116 02:56:43.435769  536361 command_runner.go:130] >     },
	I0116 02:56:43.435772  536361 command_runner.go:130] >     {
	I0116 02:56:43.435778  536361 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0116 02:56:43.435785  536361 command_runner.go:130] >       "repoTags": [
	I0116 02:56:43.435790  536361 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0116 02:56:43.435796  536361 command_runner.go:130] >       ],
	I0116 02:56:43.435800  536361 command_runner.go:130] >       "repoDigests": [
	I0116 02:56:43.435807  536361 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0116 02:56:43.435817  536361 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0116 02:56:43.435820  536361 command_runner.go:130] >       ],
	I0116 02:56:43.435828  536361 command_runner.go:130] >       "size": "31470524",
	I0116 02:56:43.435834  536361 command_runner.go:130] >       "uid": null,
	I0116 02:56:43.435847  536361 command_runner.go:130] >       "username": "",
	I0116 02:56:43.435854  536361 command_runner.go:130] >       "spec": null,
	I0116 02:56:43.435858  536361 command_runner.go:130] >       "pinned": false
	I0116 02:56:43.435864  536361 command_runner.go:130] >     },
	I0116 02:56:43.435868  536361 command_runner.go:130] >     {
	I0116 02:56:43.435874  536361 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0116 02:56:43.435885  536361 command_runner.go:130] >       "repoTags": [
	I0116 02:56:43.435893  536361 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0116 02:56:43.435897  536361 command_runner.go:130] >       ],
	I0116 02:56:43.435903  536361 command_runner.go:130] >       "repoDigests": [
	I0116 02:56:43.435910  536361 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0116 02:56:43.435919  536361 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0116 02:56:43.435927  536361 command_runner.go:130] >       ],
	I0116 02:56:43.435931  536361 command_runner.go:130] >       "size": "53621675",
	I0116 02:56:43.435935  536361 command_runner.go:130] >       "uid": null,
	I0116 02:56:43.435942  536361 command_runner.go:130] >       "username": "",
	I0116 02:56:43.435946  536361 command_runner.go:130] >       "spec": null,
	I0116 02:56:43.435950  536361 command_runner.go:130] >       "pinned": false
	I0116 02:56:43.435958  536361 command_runner.go:130] >     },
	I0116 02:56:43.435961  536361 command_runner.go:130] >     {
	I0116 02:56:43.435968  536361 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0116 02:56:43.435974  536361 command_runner.go:130] >       "repoTags": [
	I0116 02:56:43.435979  536361 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0116 02:56:43.435984  536361 command_runner.go:130] >       ],
	I0116 02:56:43.435988  536361 command_runner.go:130] >       "repoDigests": [
	I0116 02:56:43.435995  536361 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0116 02:56:43.436004  536361 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0116 02:56:43.436014  536361 command_runner.go:130] >       ],
	I0116 02:56:43.436021  536361 command_runner.go:130] >       "size": "295456551",
	I0116 02:56:43.436025  536361 command_runner.go:130] >       "uid": {
	I0116 02:56:43.436030  536361 command_runner.go:130] >         "value": "0"
	I0116 02:56:43.436034  536361 command_runner.go:130] >       },
	I0116 02:56:43.436040  536361 command_runner.go:130] >       "username": "",
	I0116 02:56:43.436044  536361 command_runner.go:130] >       "spec": null,
	I0116 02:56:43.436049  536361 command_runner.go:130] >       "pinned": false
	I0116 02:56:43.436054  536361 command_runner.go:130] >     },
	I0116 02:56:43.436064  536361 command_runner.go:130] >     {
	I0116 02:56:43.436072  536361 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0116 02:56:43.436077  536361 command_runner.go:130] >       "repoTags": [
	I0116 02:56:43.436084  536361 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0116 02:56:43.436088  536361 command_runner.go:130] >       ],
	I0116 02:56:43.436094  536361 command_runner.go:130] >       "repoDigests": [
	I0116 02:56:43.436102  536361 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0116 02:56:43.436111  536361 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0116 02:56:43.436115  536361 command_runner.go:130] >       ],
	I0116 02:56:43.436122  536361 command_runner.go:130] >       "size": "127226832",
	I0116 02:56:43.436126  536361 command_runner.go:130] >       "uid": {
	I0116 02:56:43.436130  536361 command_runner.go:130] >         "value": "0"
	I0116 02:56:43.436136  536361 command_runner.go:130] >       },
	I0116 02:56:43.436140  536361 command_runner.go:130] >       "username": "",
	I0116 02:56:43.436146  536361 command_runner.go:130] >       "spec": null,
	I0116 02:56:43.436150  536361 command_runner.go:130] >       "pinned": false
	I0116 02:56:43.436154  536361 command_runner.go:130] >     },
	I0116 02:56:43.436157  536361 command_runner.go:130] >     {
	I0116 02:56:43.436165  536361 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0116 02:56:43.436172  536361 command_runner.go:130] >       "repoTags": [
	I0116 02:56:43.436177  536361 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0116 02:56:43.436183  536361 command_runner.go:130] >       ],
	I0116 02:56:43.436187  536361 command_runner.go:130] >       "repoDigests": [
	I0116 02:56:43.436194  536361 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0116 02:56:43.436204  536361 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0116 02:56:43.436209  536361 command_runner.go:130] >       ],
	I0116 02:56:43.436214  536361 command_runner.go:130] >       "size": "123261750",
	I0116 02:56:43.436218  536361 command_runner.go:130] >       "uid": {
	I0116 02:56:43.436224  536361 command_runner.go:130] >         "value": "0"
	I0116 02:56:43.436228  536361 command_runner.go:130] >       },
	I0116 02:56:43.436234  536361 command_runner.go:130] >       "username": "",
	I0116 02:56:43.436238  536361 command_runner.go:130] >       "spec": null,
	I0116 02:56:43.436242  536361 command_runner.go:130] >       "pinned": false
	I0116 02:56:43.436247  536361 command_runner.go:130] >     },
	I0116 02:56:43.436251  536361 command_runner.go:130] >     {
	I0116 02:56:43.436284  536361 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0116 02:56:43.436296  536361 command_runner.go:130] >       "repoTags": [
	I0116 02:56:43.436303  536361 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0116 02:56:43.436307  536361 command_runner.go:130] >       ],
	I0116 02:56:43.436314  536361 command_runner.go:130] >       "repoDigests": [
	I0116 02:56:43.436321  536361 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0116 02:56:43.436330  536361 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0116 02:56:43.436338  536361 command_runner.go:130] >       ],
	I0116 02:56:43.436348  536361 command_runner.go:130] >       "size": "74749335",
	I0116 02:56:43.436358  536361 command_runner.go:130] >       "uid": null,
	I0116 02:56:43.436368  536361 command_runner.go:130] >       "username": "",
	I0116 02:56:43.436373  536361 command_runner.go:130] >       "spec": null,
	I0116 02:56:43.436379  536361 command_runner.go:130] >       "pinned": false
	I0116 02:56:43.436382  536361 command_runner.go:130] >     },
	I0116 02:56:43.436388  536361 command_runner.go:130] >     {
	I0116 02:56:43.436394  536361 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0116 02:56:43.436401  536361 command_runner.go:130] >       "repoTags": [
	I0116 02:56:43.436406  536361 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0116 02:56:43.436411  536361 command_runner.go:130] >       ],
	I0116 02:56:43.436418  536361 command_runner.go:130] >       "repoDigests": [
	I0116 02:56:43.436441  536361 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0116 02:56:43.436450  536361 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0116 02:56:43.436454  536361 command_runner.go:130] >       ],
	I0116 02:56:43.436461  536361 command_runner.go:130] >       "size": "61551410",
	I0116 02:56:43.436465  536361 command_runner.go:130] >       "uid": {
	I0116 02:56:43.436471  536361 command_runner.go:130] >         "value": "0"
	I0116 02:56:43.436474  536361 command_runner.go:130] >       },
	I0116 02:56:43.436479  536361 command_runner.go:130] >       "username": "",
	I0116 02:56:43.436483  536361 command_runner.go:130] >       "spec": null,
	I0116 02:56:43.436489  536361 command_runner.go:130] >       "pinned": false
	I0116 02:56:43.436492  536361 command_runner.go:130] >     },
	I0116 02:56:43.436498  536361 command_runner.go:130] >     {
	I0116 02:56:43.436508  536361 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0116 02:56:43.436514  536361 command_runner.go:130] >       "repoTags": [
	I0116 02:56:43.436519  536361 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0116 02:56:43.436525  536361 command_runner.go:130] >       ],
	I0116 02:56:43.436529  536361 command_runner.go:130] >       "repoDigests": [
	I0116 02:56:43.436541  536361 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0116 02:56:43.436550  536361 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0116 02:56:43.436554  536361 command_runner.go:130] >       ],
	I0116 02:56:43.436558  536361 command_runner.go:130] >       "size": "750414",
	I0116 02:56:43.436562  536361 command_runner.go:130] >       "uid": {
	I0116 02:56:43.436566  536361 command_runner.go:130] >         "value": "65535"
	I0116 02:56:43.436572  536361 command_runner.go:130] >       },
	I0116 02:56:43.436576  536361 command_runner.go:130] >       "username": "",
	I0116 02:56:43.436584  536361 command_runner.go:130] >       "spec": null,
	I0116 02:56:43.436591  536361 command_runner.go:130] >       "pinned": false
	I0116 02:56:43.436595  536361 command_runner.go:130] >     }
	I0116 02:56:43.436601  536361 command_runner.go:130] >   ]
	I0116 02:56:43.436604  536361 command_runner.go:130] > }
	I0116 02:56:43.438256  536361 crio.go:496] all images are preloaded for cri-o runtime.
	I0116 02:56:43.438279  536361 crio.go:415] Images already preloaded, skipping extraction
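crio.go treats the preload as satisfied because the image list above already contains everything Kubernetes v1.28.4 on CRI-O requires: kube-apiserver, kube-controller-manager, kube-scheduler and kube-proxy at v1.28.4, etcd 3.5.9-0, CoreDNS v1.10.1, pause 3.9, plus kindnetd and the storage provisioner, so extracting the preload tarball is skipped. The same inventory can be eyeballed by hand:

	sudo crictl images --output json | jq -r '.images[].repoTags[]' | sort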
	I0116 02:56:43.438322  536361 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 02:56:43.470086  536361 command_runner.go:130] > {
	I0116 02:56:43.470116  536361 command_runner.go:130] >   "images": [
	I0116 02:56:43.470123  536361 command_runner.go:130] >     {
	I0116 02:56:43.470135  536361 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0116 02:56:43.470143  536361 command_runner.go:130] >       "repoTags": [
	I0116 02:56:43.470157  536361 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0116 02:56:43.470167  536361 command_runner.go:130] >       ],
	I0116 02:56:43.470177  536361 command_runner.go:130] >       "repoDigests": [
	I0116 02:56:43.470190  536361 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0116 02:56:43.470199  536361 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0116 02:56:43.470205  536361 command_runner.go:130] >       ],
	I0116 02:56:43.470210  536361 command_runner.go:130] >       "size": "65258016",
	I0116 02:56:43.470216  536361 command_runner.go:130] >       "uid": null,
	I0116 02:56:43.470220  536361 command_runner.go:130] >       "username": "",
	I0116 02:56:43.470235  536361 command_runner.go:130] >       "spec": null,
	I0116 02:56:43.470242  536361 command_runner.go:130] >       "pinned": false
	I0116 02:56:43.470246  536361 command_runner.go:130] >     },
	I0116 02:56:43.470249  536361 command_runner.go:130] >     {
	I0116 02:56:43.470255  536361 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0116 02:56:43.470263  536361 command_runner.go:130] >       "repoTags": [
	I0116 02:56:43.470268  536361 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0116 02:56:43.470271  536361 command_runner.go:130] >       ],
	I0116 02:56:43.470275  536361 command_runner.go:130] >       "repoDigests": [
	I0116 02:56:43.470283  536361 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0116 02:56:43.470290  536361 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0116 02:56:43.470293  536361 command_runner.go:130] >       ],
	I0116 02:56:43.470302  536361 command_runner.go:130] >       "size": "31470524",
	I0116 02:56:43.470306  536361 command_runner.go:130] >       "uid": null,
	I0116 02:56:43.470310  536361 command_runner.go:130] >       "username": "",
	I0116 02:56:43.470315  536361 command_runner.go:130] >       "spec": null,
	I0116 02:56:43.470319  536361 command_runner.go:130] >       "pinned": false
	I0116 02:56:43.470325  536361 command_runner.go:130] >     },
	I0116 02:56:43.470329  536361 command_runner.go:130] >     {
	I0116 02:56:43.470337  536361 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0116 02:56:43.470344  536361 command_runner.go:130] >       "repoTags": [
	I0116 02:56:43.470349  536361 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0116 02:56:43.470355  536361 command_runner.go:130] >       ],
	I0116 02:56:43.470361  536361 command_runner.go:130] >       "repoDigests": [
	I0116 02:56:43.470371  536361 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0116 02:56:43.470380  536361 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0116 02:56:43.470386  536361 command_runner.go:130] >       ],
	I0116 02:56:43.470390  536361 command_runner.go:130] >       "size": "53621675",
	I0116 02:56:43.470396  536361 command_runner.go:130] >       "uid": null,
	I0116 02:56:43.470401  536361 command_runner.go:130] >       "username": "",
	I0116 02:56:43.470407  536361 command_runner.go:130] >       "spec": null,
	I0116 02:56:43.470411  536361 command_runner.go:130] >       "pinned": false
	I0116 02:56:43.470417  536361 command_runner.go:130] >     },
	I0116 02:56:43.470420  536361 command_runner.go:130] >     {
	I0116 02:56:43.470429  536361 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0116 02:56:43.470435  536361 command_runner.go:130] >       "repoTags": [
	I0116 02:56:43.470440  536361 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0116 02:56:43.470446  536361 command_runner.go:130] >       ],
	I0116 02:56:43.470450  536361 command_runner.go:130] >       "repoDigests": [
	I0116 02:56:43.470459  536361 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0116 02:56:43.470466  536361 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0116 02:56:43.470479  536361 command_runner.go:130] >       ],
	I0116 02:56:43.470486  536361 command_runner.go:130] >       "size": "295456551",
	I0116 02:56:43.470490  536361 command_runner.go:130] >       "uid": {
	I0116 02:56:43.470496  536361 command_runner.go:130] >         "value": "0"
	I0116 02:56:43.470500  536361 command_runner.go:130] >       },
	I0116 02:56:43.470506  536361 command_runner.go:130] >       "username": "",
	I0116 02:56:43.470510  536361 command_runner.go:130] >       "spec": null,
	I0116 02:56:43.470517  536361 command_runner.go:130] >       "pinned": false
	I0116 02:56:43.470521  536361 command_runner.go:130] >     },
	I0116 02:56:43.470527  536361 command_runner.go:130] >     {
	I0116 02:56:43.470533  536361 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0116 02:56:43.470541  536361 command_runner.go:130] >       "repoTags": [
	I0116 02:56:43.470548  536361 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0116 02:56:43.470558  536361 command_runner.go:130] >       ],
	I0116 02:56:43.470563  536361 command_runner.go:130] >       "repoDigests": [
	I0116 02:56:43.470572  536361 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0116 02:56:43.470582  536361 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0116 02:56:43.470588  536361 command_runner.go:130] >       ],
	I0116 02:56:43.470594  536361 command_runner.go:130] >       "size": "127226832",
	I0116 02:56:43.470600  536361 command_runner.go:130] >       "uid": {
	I0116 02:56:43.470604  536361 command_runner.go:130] >         "value": "0"
	I0116 02:56:43.470610  536361 command_runner.go:130] >       },
	I0116 02:56:43.470615  536361 command_runner.go:130] >       "username": "",
	I0116 02:56:43.470621  536361 command_runner.go:130] >       "spec": null,
	I0116 02:56:43.470625  536361 command_runner.go:130] >       "pinned": false
	I0116 02:56:43.470630  536361 command_runner.go:130] >     },
	I0116 02:56:43.470634  536361 command_runner.go:130] >     {
	I0116 02:56:43.470643  536361 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0116 02:56:43.470649  536361 command_runner.go:130] >       "repoTags": [
	I0116 02:56:43.470654  536361 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0116 02:56:43.470660  536361 command_runner.go:130] >       ],
	I0116 02:56:43.470665  536361 command_runner.go:130] >       "repoDigests": [
	I0116 02:56:43.470674  536361 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0116 02:56:43.470684  536361 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0116 02:56:43.470690  536361 command_runner.go:130] >       ],
	I0116 02:56:43.470694  536361 command_runner.go:130] >       "size": "123261750",
	I0116 02:56:43.470706  536361 command_runner.go:130] >       "uid": {
	I0116 02:56:43.470713  536361 command_runner.go:130] >         "value": "0"
	I0116 02:56:43.470717  536361 command_runner.go:130] >       },
	I0116 02:56:43.470723  536361 command_runner.go:130] >       "username": "",
	I0116 02:56:43.470727  536361 command_runner.go:130] >       "spec": null,
	I0116 02:56:43.470733  536361 command_runner.go:130] >       "pinned": false
	I0116 02:56:43.470737  536361 command_runner.go:130] >     },
	I0116 02:56:43.470741  536361 command_runner.go:130] >     {
	I0116 02:56:43.470747  536361 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0116 02:56:43.470753  536361 command_runner.go:130] >       "repoTags": [
	I0116 02:56:43.470759  536361 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0116 02:56:43.470764  536361 command_runner.go:130] >       ],
	I0116 02:56:43.470768  536361 command_runner.go:130] >       "repoDigests": [
	I0116 02:56:43.470778  536361 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0116 02:56:43.470787  536361 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0116 02:56:43.470792  536361 command_runner.go:130] >       ],
	I0116 02:56:43.470797  536361 command_runner.go:130] >       "size": "74749335",
	I0116 02:56:43.470803  536361 command_runner.go:130] >       "uid": null,
	I0116 02:56:43.470809  536361 command_runner.go:130] >       "username": "",
	I0116 02:56:43.470815  536361 command_runner.go:130] >       "spec": null,
	I0116 02:56:43.470819  536361 command_runner.go:130] >       "pinned": false
	I0116 02:56:43.470824  536361 command_runner.go:130] >     },
	I0116 02:56:43.470830  536361 command_runner.go:130] >     {
	I0116 02:56:43.470842  536361 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0116 02:56:43.470848  536361 command_runner.go:130] >       "repoTags": [
	I0116 02:56:43.470854  536361 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0116 02:56:43.470859  536361 command_runner.go:130] >       ],
	I0116 02:56:43.470863  536361 command_runner.go:130] >       "repoDigests": [
	I0116 02:56:43.470889  536361 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0116 02:56:43.470898  536361 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0116 02:56:43.470904  536361 command_runner.go:130] >       ],
	I0116 02:56:43.470909  536361 command_runner.go:130] >       "size": "61551410",
	I0116 02:56:43.470913  536361 command_runner.go:130] >       "uid": {
	I0116 02:56:43.470919  536361 command_runner.go:130] >         "value": "0"
	I0116 02:56:43.470923  536361 command_runner.go:130] >       },
	I0116 02:56:43.470929  536361 command_runner.go:130] >       "username": "",
	I0116 02:56:43.470936  536361 command_runner.go:130] >       "spec": null,
	I0116 02:56:43.470943  536361 command_runner.go:130] >       "pinned": false
	I0116 02:56:43.470946  536361 command_runner.go:130] >     },
	I0116 02:56:43.470953  536361 command_runner.go:130] >     {
	I0116 02:56:43.470959  536361 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0116 02:56:43.470965  536361 command_runner.go:130] >       "repoTags": [
	I0116 02:56:43.470970  536361 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0116 02:56:43.470976  536361 command_runner.go:130] >       ],
	I0116 02:56:43.470980  536361 command_runner.go:130] >       "repoDigests": [
	I0116 02:56:43.470989  536361 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0116 02:56:43.470998  536361 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0116 02:56:43.471002  536361 command_runner.go:130] >       ],
	I0116 02:56:43.471008  536361 command_runner.go:130] >       "size": "750414",
	I0116 02:56:43.471012  536361 command_runner.go:130] >       "uid": {
	I0116 02:56:43.471018  536361 command_runner.go:130] >         "value": "65535"
	I0116 02:56:43.471022  536361 command_runner.go:130] >       },
	I0116 02:56:43.471028  536361 command_runner.go:130] >       "username": "",
	I0116 02:56:43.471032  536361 command_runner.go:130] >       "spec": null,
	I0116 02:56:43.471041  536361 command_runner.go:130] >       "pinned": false
	I0116 02:56:43.471047  536361 command_runner.go:130] >     }
	I0116 02:56:43.471050  536361 command_runner.go:130] >   ]
	I0116 02:56:43.471056  536361 command_runner.go:130] > }
	I0116 02:56:43.471174  536361 crio.go:496] all images are preloaded for cri-o runtime.
	I0116 02:56:43.471188  536361 cache_images.go:84] Images are preloaded, skipping loading
	I0116 02:56:43.471244  536361 ssh_runner.go:195] Run: crio config
	I0116 02:56:43.506885  536361 command_runner.go:130] ! time="2024-01-16 02:56:43.506473181Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0116 02:56:43.506911  536361 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0116 02:56:43.512345  536361 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0116 02:56:43.512372  536361 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0116 02:56:43.512381  536361 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0116 02:56:43.512390  536361 command_runner.go:130] > #
	I0116 02:56:43.512400  536361 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0116 02:56:43.512410  536361 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0116 02:56:43.512423  536361 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0116 02:56:43.512438  536361 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0116 02:56:43.512449  536361 command_runner.go:130] > # reload'.
	I0116 02:56:43.512463  536361 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0116 02:56:43.512485  536361 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0116 02:56:43.512513  536361 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0116 02:56:43.512530  536361 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0116 02:56:43.512539  536361 command_runner.go:130] > [crio]
	I0116 02:56:43.512551  536361 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0116 02:56:43.512563  536361 command_runner.go:130] > # containers images, in this directory.
	I0116 02:56:43.512588  536361 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0116 02:56:43.512602  536361 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0116 02:56:43.512612  536361 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0116 02:56:43.512626  536361 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0116 02:56:43.512640  536361 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0116 02:56:43.512650  536361 command_runner.go:130] > # storage_driver = "vfs"
	I0116 02:56:43.512661  536361 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0116 02:56:43.512675  536361 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0116 02:56:43.512684  536361 command_runner.go:130] > # storage_option = [
	I0116 02:56:43.512691  536361 command_runner.go:130] > # ]
	I0116 02:56:43.512705  536361 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0116 02:56:43.512719  536361 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0116 02:56:43.512734  536361 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0116 02:56:43.512747  536361 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0116 02:56:43.512761  536361 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0116 02:56:43.512771  536361 command_runner.go:130] > # always happen on a node reboot
	I0116 02:56:43.512780  536361 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0116 02:56:43.512793  536361 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0116 02:56:43.512806  536361 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0116 02:56:43.512832  536361 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0116 02:56:43.512845  536361 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0116 02:56:43.512861  536361 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0116 02:56:43.512877  536361 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0116 02:56:43.512888  536361 command_runner.go:130] > # internal_wipe = true
	I0116 02:56:43.512913  536361 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0116 02:56:43.512926  536361 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0116 02:56:43.512939  536361 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0116 02:56:43.512952  536361 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0116 02:56:43.512966  536361 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0116 02:56:43.512976  536361 command_runner.go:130] > [crio.api]
	I0116 02:56:43.512991  536361 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0116 02:56:43.513002  536361 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0116 02:56:43.513012  536361 command_runner.go:130] > # IP address on which the stream server will listen.
	I0116 02:56:43.513022  536361 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0116 02:56:43.513035  536361 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0116 02:56:43.513047  536361 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0116 02:56:43.513058  536361 command_runner.go:130] > # stream_port = "0"
	I0116 02:56:43.513070  536361 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0116 02:56:43.513078  536361 command_runner.go:130] > # stream_enable_tls = false
	I0116 02:56:43.513089  536361 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0116 02:56:43.513099  536361 command_runner.go:130] > # stream_idle_timeout = ""
	I0116 02:56:43.513113  536361 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0116 02:56:43.513126  536361 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0116 02:56:43.513136  536361 command_runner.go:130] > # minutes.
	I0116 02:56:43.513145  536361 command_runner.go:130] > # stream_tls_cert = ""
	I0116 02:56:43.513159  536361 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0116 02:56:43.513172  536361 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0116 02:56:43.513182  536361 command_runner.go:130] > # stream_tls_key = ""
	I0116 02:56:43.513198  536361 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0116 02:56:43.513213  536361 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0116 02:56:43.513226  536361 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0116 02:56:43.513236  536361 command_runner.go:130] > # stream_tls_ca = ""
	I0116 02:56:43.513253  536361 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0116 02:56:43.513264  536361 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0116 02:56:43.513280  536361 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0116 02:56:43.513291  536361 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0116 02:56:43.513326  536361 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0116 02:56:43.513339  536361 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0116 02:56:43.513349  536361 command_runner.go:130] > [crio.runtime]
	I0116 02:56:43.513363  536361 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0116 02:56:43.513373  536361 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0116 02:56:43.513384  536361 command_runner.go:130] > # "nofile=1024:2048"
	I0116 02:56:43.513400  536361 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0116 02:56:43.513411  536361 command_runner.go:130] > # default_ulimits = [
	I0116 02:56:43.513419  536361 command_runner.go:130] > # ]
	I0116 02:56:43.513431  536361 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0116 02:56:43.513445  536361 command_runner.go:130] > # no_pivot = false
	I0116 02:56:43.513459  536361 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0116 02:56:43.513477  536361 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0116 02:56:43.513489  536361 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0116 02:56:43.513503  536361 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0116 02:56:43.513515  536361 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0116 02:56:43.513529  536361 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0116 02:56:43.513540  536361 command_runner.go:130] > # conmon = ""
	I0116 02:56:43.513548  536361 command_runner.go:130] > # Cgroup setting for conmon
	I0116 02:56:43.513563  536361 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0116 02:56:43.513578  536361 command_runner.go:130] > conmon_cgroup = "pod"
	I0116 02:56:43.513591  536361 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0116 02:56:43.513603  536361 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0116 02:56:43.513618  536361 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0116 02:56:43.513626  536361 command_runner.go:130] > # conmon_env = [
	I0116 02:56:43.513635  536361 command_runner.go:130] > # ]
	I0116 02:56:43.513645  536361 command_runner.go:130] > # Additional environment variables to set for all the
	I0116 02:56:43.513657  536361 command_runner.go:130] > # containers. These are overridden if set in the
	I0116 02:56:43.513673  536361 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0116 02:56:43.513683  536361 command_runner.go:130] > # default_env = [
	I0116 02:56:43.513692  536361 command_runner.go:130] > # ]
	I0116 02:56:43.513703  536361 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0116 02:56:43.513714  536361 command_runner.go:130] > # selinux = false
	I0116 02:56:43.513728  536361 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0116 02:56:43.513742  536361 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0116 02:56:43.513755  536361 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0116 02:56:43.513765  536361 command_runner.go:130] > # seccomp_profile = ""
	I0116 02:56:43.513777  536361 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0116 02:56:43.513788  536361 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0116 02:56:43.513802  536361 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0116 02:56:43.513813  536361 command_runner.go:130] > # which might increase security.
	I0116 02:56:43.513823  536361 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0116 02:56:43.513836  536361 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0116 02:56:43.513850  536361 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0116 02:56:43.513863  536361 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0116 02:56:43.513877  536361 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0116 02:56:43.513895  536361 command_runner.go:130] > # This option supports live configuration reload.
	I0116 02:56:43.513906  536361 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0116 02:56:43.513917  536361 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0116 02:56:43.513927  536361 command_runner.go:130] > # the cgroup blockio controller.
	I0116 02:56:43.513938  536361 command_runner.go:130] > # blockio_config_file = ""
	I0116 02:56:43.513950  536361 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0116 02:56:43.513960  536361 command_runner.go:130] > # irqbalance daemon.
	I0116 02:56:43.513973  536361 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0116 02:56:43.513987  536361 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0116 02:56:43.513999  536361 command_runner.go:130] > # This option supports live configuration reload.
	I0116 02:56:43.514010  536361 command_runner.go:130] > # rdt_config_file = ""
	I0116 02:56:43.514022  536361 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0116 02:56:43.514032  536361 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0116 02:56:43.514043  536361 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0116 02:56:43.514054  536361 command_runner.go:130] > # separate_pull_cgroup = ""
	I0116 02:56:43.514066  536361 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0116 02:56:43.514079  536361 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0116 02:56:43.514087  536361 command_runner.go:130] > # will be added.
	I0116 02:56:43.514099  536361 command_runner.go:130] > # default_capabilities = [
	I0116 02:56:43.514108  536361 command_runner.go:130] > # 	"CHOWN",
	I0116 02:56:43.514116  536361 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0116 02:56:43.514126  536361 command_runner.go:130] > # 	"FSETID",
	I0116 02:56:43.514135  536361 command_runner.go:130] > # 	"FOWNER",
	I0116 02:56:43.514144  536361 command_runner.go:130] > # 	"SETGID",
	I0116 02:56:43.514154  536361 command_runner.go:130] > # 	"SETUID",
	I0116 02:56:43.514161  536361 command_runner.go:130] > # 	"SETPCAP",
	I0116 02:56:43.514172  536361 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0116 02:56:43.514181  536361 command_runner.go:130] > # 	"KILL",
	I0116 02:56:43.514188  536361 command_runner.go:130] > # ]
	I0116 02:56:43.514204  536361 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0116 02:56:43.514219  536361 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0116 02:56:43.514230  536361 command_runner.go:130] > # add_inheritable_capabilities = true
	I0116 02:56:43.514244  536361 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0116 02:56:43.514257  536361 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0116 02:56:43.514269  536361 command_runner.go:130] > # default_sysctls = [
	I0116 02:56:43.514278  536361 command_runner.go:130] > # ]
	I0116 02:56:43.514292  536361 command_runner.go:130] > # List of devices on the host that a
	I0116 02:56:43.514306  536361 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0116 02:56:43.514316  536361 command_runner.go:130] > # allowed_devices = [
	I0116 02:56:43.514324  536361 command_runner.go:130] > # 	"/dev/fuse",
	I0116 02:56:43.514330  536361 command_runner.go:130] > # ]
	I0116 02:56:43.514345  536361 command_runner.go:130] > # List of additional devices, specified as
	I0116 02:56:43.514389  536361 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0116 02:56:43.514401  536361 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0116 02:56:43.514412  536361 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0116 02:56:43.514422  536361 command_runner.go:130] > # additional_devices = [
	I0116 02:56:43.514431  536361 command_runner.go:130] > # ]
	I0116 02:56:43.514441  536361 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0116 02:56:43.514451  536361 command_runner.go:130] > # cdi_spec_dirs = [
	I0116 02:56:43.514461  536361 command_runner.go:130] > # 	"/etc/cdi",
	I0116 02:56:43.514470  536361 command_runner.go:130] > # 	"/var/run/cdi",
	I0116 02:56:43.514479  536361 command_runner.go:130] > # ]
	I0116 02:56:43.514490  536361 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0116 02:56:43.514503  536361 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0116 02:56:43.514517  536361 command_runner.go:130] > # Defaults to false.
	I0116 02:56:43.514528  536361 command_runner.go:130] > # device_ownership_from_security_context = false
	I0116 02:56:43.514543  536361 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0116 02:56:43.514556  536361 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0116 02:56:43.514566  536361 command_runner.go:130] > # hooks_dir = [
	I0116 02:56:43.514578  536361 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0116 02:56:43.514587  536361 command_runner.go:130] > # ]
	I0116 02:56:43.514599  536361 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0116 02:56:43.514613  536361 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0116 02:56:43.514625  536361 command_runner.go:130] > # its default mounts from the following two files:
	I0116 02:56:43.514634  536361 command_runner.go:130] > #
	I0116 02:56:43.514645  536361 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0116 02:56:43.514659  536361 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0116 02:56:43.514670  536361 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0116 02:56:43.514679  536361 command_runner.go:130] > #
	I0116 02:56:43.514690  536361 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0116 02:56:43.514704  536361 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0116 02:56:43.514718  536361 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0116 02:56:43.514734  536361 command_runner.go:130] > #      only add mounts it finds in this file.
	I0116 02:56:43.514743  536361 command_runner.go:130] > #
	I0116 02:56:43.514751  536361 command_runner.go:130] > # default_mounts_file = ""
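As a concrete example of the /SRC:/DST format just described (the mount pair is illustrative, not from this run):

	# Hypothetical override file: bind the host CA bundle into every container.
	echo '/etc/pki/ca-trust:/etc/pki/ca-trust' | sudo tee /etc/containers/mounts.conf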
	I0116 02:56:43.514763  536361 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0116 02:56:43.514782  536361 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0116 02:56:43.514792  536361 command_runner.go:130] > # pids_limit = 0
	I0116 02:56:43.514807  536361 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0116 02:56:43.514821  536361 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0116 02:56:43.514834  536361 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0116 02:56:43.514851  536361 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0116 02:56:43.514866  536361 command_runner.go:130] > # log_size_max = -1
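Both options above are deprecated in favor of kubelet settings; the kubelet-side equivalents look roughly like this (flag values are illustrative):

	# Sketch: per-pod PID cap and container log rotation handled by the kubelet.
	kubelet --pod-pids-limit=1024 \
	        --container-log-max-size=10Mi \
	        --container-log-max-files=5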
	I0116 02:56:43.514881  536361 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0116 02:56:43.514892  536361 command_runner.go:130] > # log_to_journald = false
	I0116 02:56:43.514905  536361 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0116 02:56:43.514917  536361 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0116 02:56:43.514926  536361 command_runner.go:130] > # Path to directory for container attach sockets.
	I0116 02:56:43.514938  536361 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0116 02:56:43.514950  536361 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0116 02:56:43.514964  536361 command_runner.go:130] > # bind_mount_prefix = ""
	I0116 02:56:43.514977  536361 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0116 02:56:43.514988  536361 command_runner.go:130] > # read_only = false
	I0116 02:56:43.515000  536361 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0116 02:56:43.515013  536361 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0116 02:56:43.515024  536361 command_runner.go:130] > # live configuration reload.
	I0116 02:56:43.515032  536361 command_runner.go:130] > # log_level = "info"
	I0116 02:56:43.515045  536361 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0116 02:56:43.515058  536361 command_runner.go:130] > # This option supports live configuration reload.
	I0116 02:56:43.515068  536361 command_runner.go:130] > # log_filter = ""
	I0116 02:56:43.515079  536361 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0116 02:56:43.515090  536361 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0116 02:56:43.515100  536361 command_runner.go:130] > # separated by comma.
	I0116 02:56:43.515110  536361 command_runner.go:130] > # uid_mappings = ""
	I0116 02:56:43.515122  536361 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0116 02:56:43.515135  536361 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0116 02:56:43.515145  536361 command_runner.go:130] > # separated by comma.
	I0116 02:56:43.515153  536361 command_runner.go:130] > # gid_mappings = ""
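A worked instance of the containerUID:HostUID:Size form described above (the 100000-based host range is illustrative):

	# Sketch: map container root (0) onto unprivileged host IDs 100000-165535.
	sudo tee /etc/crio/crio.conf.d/10-userns.conf <<-'EOF'
	[crio.runtime]
	uid_mappings = "0:100000:65536"
	gid_mappings = "0:100000:65536"
	EOF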
	I0116 02:56:43.515171  536361 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0116 02:56:43.515185  536361 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0116 02:56:43.515199  536361 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0116 02:56:43.515209  536361 command_runner.go:130] > # minimum_mappable_uid = -1
	I0116 02:56:43.515220  536361 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0116 02:56:43.515234  536361 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0116 02:56:43.515248  536361 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0116 02:56:43.515258  536361 command_runner.go:130] > # minimum_mappable_gid = -1
	I0116 02:56:43.515270  536361 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0116 02:56:43.515283  536361 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0116 02:56:43.515299  536361 command_runner.go:130] > # value is 30s; lower values are ignored by CRI-O.
	I0116 02:56:43.515309  536361 command_runner.go:130] > # ctr_stop_timeout = 30
	I0116 02:56:43.515322  536361 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0116 02:56:43.515339  536361 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0116 02:56:43.515351  536361 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0116 02:56:43.515362  536361 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0116 02:56:43.515372  536361 command_runner.go:130] > # drop_infra_ctr = true
	I0116 02:56:43.515388  536361 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0116 02:56:43.515403  536361 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0116 02:56:43.515420  536361 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0116 02:56:43.515430  536361 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0116 02:56:43.515442  536361 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0116 02:56:43.515454  536361 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0116 02:56:43.515465  536361 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0116 02:56:43.515478  536361 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0116 02:56:43.515487  536361 command_runner.go:130] > # pinns_path = ""
	I0116 02:56:43.515498  536361 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0116 02:56:43.515512  536361 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0116 02:56:43.515526  536361 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0116 02:56:43.515540  536361 command_runner.go:130] > # default_runtime = "runc"
	I0116 02:56:43.515552  536361 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0116 02:56:43.515568  536361 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of creating the path as a directory).
	I0116 02:56:43.515589  536361 command_runner.go:130] > # This option protects against source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0116 02:56:43.515602  536361 command_runner.go:130] > # creation as a file is not desired either.
	I0116 02:56:43.515619  536361 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0116 02:56:43.515630  536361 command_runner.go:130] > # the hostname is being managed dynamically.
	I0116 02:56:43.515646  536361 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0116 02:56:43.515655  536361 command_runner.go:130] > # ]
	I0116 02:56:43.515666  536361 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0116 02:56:43.515680  536361 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0116 02:56:43.515695  536361 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0116 02:56:43.515713  536361 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0116 02:56:43.515721  536361 command_runner.go:130] > #
	I0116 02:56:43.515731  536361 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0116 02:56:43.515743  536361 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0116 02:56:43.515753  536361 command_runner.go:130] > #  runtime_type = "oci"
	I0116 02:56:43.515768  536361 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0116 02:56:43.515780  536361 command_runner.go:130] > #  privileged_without_host_devices = false
	I0116 02:56:43.515791  536361 command_runner.go:130] > #  allowed_annotations = []
	I0116 02:56:43.515798  536361 command_runner.go:130] > # Where:
	I0116 02:56:43.515813  536361 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0116 02:56:43.515827  536361 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0116 02:56:43.515841  536361 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0116 02:56:43.515855  536361 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0116 02:56:43.515866  536361 command_runner.go:130] > #   in $PATH.
	I0116 02:56:43.515880  536361 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0116 02:56:43.515892  536361 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0116 02:56:43.515905  536361 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0116 02:56:43.515915  536361 command_runner.go:130] > #   state.
	I0116 02:56:43.515928  536361 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0116 02:56:43.515940  536361 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0116 02:56:43.515952  536361 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0116 02:56:43.515965  536361 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0116 02:56:43.515978  536361 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0116 02:56:43.515993  536361 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0116 02:56:43.516004  536361 command_runner.go:130] > #   The currently recognized values are:
	I0116 02:56:43.516018  536361 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0116 02:56:43.516033  536361 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0116 02:56:43.516047  536361 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0116 02:56:43.516060  536361 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0116 02:56:43.516077  536361 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0116 02:56:43.516091  536361 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0116 02:56:43.516111  536361 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0116 02:56:43.516125  536361 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0116 02:56:43.516136  536361 command_runner.go:130] > #   should be moved to the container's cgroup
	I0116 02:56:43.516147  536361 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0116 02:56:43.516159  536361 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0116 02:56:43.516169  536361 command_runner.go:130] > runtime_type = "oci"
	I0116 02:56:43.516178  536361 command_runner.go:130] > runtime_root = "/run/runc"
	I0116 02:56:43.516189  536361 command_runner.go:130] > runtime_config_path = ""
	I0116 02:56:43.516197  536361 command_runner.go:130] > monitor_path = ""
	I0116 02:56:43.516206  536361 command_runner.go:130] > monitor_cgroup = ""
	I0116 02:56:43.516214  536361 command_runner.go:130] > monitor_exec_cgroup = ""
	I0116 02:56:43.516295  536361 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0116 02:56:43.516306  536361 command_runner.go:130] > # running containers
	I0116 02:56:43.516314  536361 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0116 02:56:43.516328  536361 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0116 02:56:43.516342  536361 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0116 02:56:43.516355  536361 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I0116 02:56:43.516368  536361 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0116 02:56:43.516383  536361 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0116 02:56:43.516393  536361 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0116 02:56:43.516402  536361 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0116 02:56:43.516414  536361 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0116 02:56:43.516425  536361 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
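Following the handler format spelled out above, wiring in one of these commented-out runtimes and exposing it to Kubernetes might look like this (the binary path, runtime_root, and RuntimeClass name are illustrative sketches, not part of this run):

	# Sketch: register crun as an additional OCI handler...
	sudo tee /etc/crio/crio.conf.d/10-crun.conf <<-'EOF'
	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"
	EOF
	sudo systemctl restart crio
	# ...then let pods select it via the CRI runtime handler.
	kubectl apply -f - <<-'EOF'
	apiVersion: node.k8s.io/v1
	kind: RuntimeClass
	metadata:
	  name: crun
	handler: crun
	EOF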
	I0116 02:56:43.516439  536361 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0116 02:56:43.516452  536361 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0116 02:56:43.516466  536361 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0116 02:56:43.516486  536361 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0116 02:56:43.516503  536361 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0116 02:56:43.516516  536361 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0116 02:56:43.516534  536361 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0116 02:56:43.516553  536361 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0116 02:56:43.516566  536361 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0116 02:56:43.516583  536361 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0116 02:56:43.516593  536361 command_runner.go:130] > # Example:
	I0116 02:56:43.516603  536361 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0116 02:56:43.516616  536361 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0116 02:56:43.516631  536361 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0116 02:56:43.516644  536361 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0116 02:56:43.516653  536361 command_runner.go:130] > # cpuset = "0-1"
	I0116 02:56:43.516660  536361 command_runner.go:130] > # cpushares = 0
	I0116 02:56:43.516670  536361 command_runner.go:130] > # Where:
	I0116 02:56:43.516680  536361 command_runner.go:130] > # The workload name is workload-type.
	I0116 02:56:43.516695  536361 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0116 02:56:43.516708  536361 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0116 02:56:43.516721  536361 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0116 02:56:43.516738  536361 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0116 02:56:43.516753  536361 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
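Putting the two annotation shapes above together, an opted-in pod would carry something like the following (the container name and share value are illustrative, and the exact override-key layout varies between CRI-O versions, so treat this as a sketch):

	kubectl apply -f - <<-'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: demo
	  annotations:
	    io.crio/workload: ""                               # activation: key only, value ignored
	    io.crio.workload-type/demo: '{"cpushares": "512"}' # per-container override
	spec:
	  containers:
	  - name: demo
	    image: registry.k8s.io/pause:3.9
	EOF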
	I0116 02:56:43.516760  536361 command_runner.go:130] > # 
	I0116 02:56:43.516773  536361 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0116 02:56:43.516781  536361 command_runner.go:130] > #
	I0116 02:56:43.516792  536361 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0116 02:56:43.516806  536361 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0116 02:56:43.516820  536361 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0116 02:56:43.516834  536361 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0116 02:56:43.516850  536361 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
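For comparison, the system-wide file referenced above uses containers-registries.conf(5) TOML; a minimal sketch with a hypothetical insecure mirror:

	# Sketch of /etc/containers/registries.conf (the registry host is illustrative).
	sudo tee /etc/containers/registries.conf <<-'EOF'
	unqualified-search-registries = ["docker.io"]

	[[registry]]
	location = "registry.internal:5000"
	insecure = true
	EOF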
	I0116 02:56:43.516860  536361 command_runner.go:130] > [crio.image]
	I0116 02:56:43.516871  536361 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0116 02:56:43.516883  536361 command_runner.go:130] > # default_transport = "docker://"
	I0116 02:56:43.516897  536361 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0116 02:56:43.516911  536361 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0116 02:56:43.516920  536361 command_runner.go:130] > # global_auth_file = ""
	I0116 02:56:43.516930  536361 command_runner.go:130] > # The image used to instantiate infra containers.
	I0116 02:56:43.516942  536361 command_runner.go:130] > # This option supports live configuration reload.
	I0116 02:56:43.516953  536361 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0116 02:56:43.516968  536361 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0116 02:56:43.516981  536361 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0116 02:56:43.516993  536361 command_runner.go:130] > # This option supports live configuration reload.
	I0116 02:56:43.517003  536361 command_runner.go:130] > # pause_image_auth_file = ""
	I0116 02:56:43.517014  536361 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0116 02:56:43.517040  536361 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0116 02:56:43.517054  536361 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0116 02:56:43.517068  536361 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0116 02:56:43.517082  536361 command_runner.go:130] > # pause_command = "/pause"
	I0116 02:56:43.517096  536361 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0116 02:56:43.517108  536361 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0116 02:56:43.517122  536361 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0116 02:56:43.517136  536361 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0116 02:56:43.517148  536361 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0116 02:56:43.517158  536361 command_runner.go:130] > # signature_policy = ""
	I0116 02:56:43.517175  536361 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0116 02:56:43.517189  536361 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0116 02:56:43.517198  536361 command_runner.go:130] > # changing them here.
	I0116 02:56:43.517211  536361 command_runner.go:130] > # insecure_registries = [
	I0116 02:56:43.517220  536361 command_runner.go:130] > # ]
	I0116 02:56:43.517232  536361 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0116 02:56:43.517244  536361 command_runner.go:130] > # ignore; the last of these ignores volumes entirely.
	I0116 02:56:43.517254  536361 command_runner.go:130] > # image_volumes = "mkdir"
	I0116 02:56:43.517264  536361 command_runner.go:130] > # Temporary directory to use for storing big files
	I0116 02:56:43.517275  536361 command_runner.go:130] > # big_files_temporary_dir = ""
	I0116 02:56:43.517286  536361 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0116 02:56:43.517299  536361 command_runner.go:130] > # CNI plugins.
	I0116 02:56:43.517309  536361 command_runner.go:130] > [crio.network]
	I0116 02:56:43.517321  536361 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0116 02:56:43.517333  536361 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0116 02:56:43.517344  536361 command_runner.go:130] > # cni_default_network = ""
	I0116 02:56:43.517357  536361 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0116 02:56:43.517370  536361 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0116 02:56:43.517386  536361 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0116 02:56:43.517396  536361 command_runner.go:130] > # plugin_dirs = [
	I0116 02:56:43.517406  536361 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0116 02:56:43.517414  536361 command_runner.go:130] > # ]
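The files scanned here are ordinary CNI configs; a minimal bridge sketch using this cluster's 10.244.0.0/16 pod CIDR (the file name and plugin choice are illustrative, and this particular run ends up with kindnet instead, as the log below shows):

	sudo tee /etc/cni/net.d/10-bridge.conflist <<-'EOF'
	{
	  "cniVersion": "0.4.0",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "cni0",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF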
	I0116 02:56:43.517427  536361 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0116 02:56:43.517436  536361 command_runner.go:130] > [crio.metrics]
	I0116 02:56:43.517446  536361 command_runner.go:130] > # Globally enable or disable metrics support.
	I0116 02:56:43.517456  536361 command_runner.go:130] > # enable_metrics = false
	I0116 02:56:43.517468  536361 command_runner.go:130] > # Specify enabled metrics collectors.
	I0116 02:56:43.517477  536361 command_runner.go:130] > # Per default all metrics are enabled.
	I0116 02:56:43.517491  536361 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0116 02:56:43.517508  536361 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0116 02:56:43.517546  536361 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0116 02:56:43.517562  536361 command_runner.go:130] > # metrics_collectors = [
	I0116 02:56:43.517576  536361 command_runner.go:130] > # 	"operations",
	I0116 02:56:43.517588  536361 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0116 02:56:43.517599  536361 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0116 02:56:43.517607  536361 command_runner.go:130] > # 	"operations_errors",
	I0116 02:56:43.517618  536361 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0116 02:56:43.517626  536361 command_runner.go:130] > # 	"image_pulls_by_name",
	I0116 02:56:43.517637  536361 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0116 02:56:43.517646  536361 command_runner.go:130] > # 	"image_pulls_failures",
	I0116 02:56:43.517656  536361 command_runner.go:130] > # 	"image_pulls_successes",
	I0116 02:56:43.517665  536361 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0116 02:56:43.517675  536361 command_runner.go:130] > # 	"image_layer_reuse",
	I0116 02:56:43.517683  536361 command_runner.go:130] > # 	"containers_oom_total",
	I0116 02:56:43.517691  536361 command_runner.go:130] > # 	"containers_oom",
	I0116 02:56:43.517701  536361 command_runner.go:130] > # 	"processes_defunct",
	I0116 02:56:43.517711  536361 command_runner.go:130] > # 	"operations_total",
	I0116 02:56:43.517724  536361 command_runner.go:130] > # 	"operations_latency_seconds",
	I0116 02:56:43.517739  536361 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0116 02:56:43.517750  536361 command_runner.go:130] > # 	"operations_errors_total",
	I0116 02:56:43.517758  536361 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0116 02:56:43.517769  536361 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0116 02:56:43.517777  536361 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0116 02:56:43.517788  536361 command_runner.go:130] > # 	"image_pulls_success_total",
	I0116 02:56:43.517797  536361 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0116 02:56:43.517808  536361 command_runner.go:130] > # 	"containers_oom_count_total",
	I0116 02:56:43.517817  536361 command_runner.go:130] > # ]
	I0116 02:56:43.517827  536361 command_runner.go:130] > # The port on which the metrics server will listen.
	I0116 02:56:43.517837  536361 command_runner.go:130] > # metrics_port = 9090
	I0116 02:56:43.517849  536361 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0116 02:56:43.517860  536361 command_runner.go:130] > # metrics_socket = ""
	I0116 02:56:43.517872  536361 command_runner.go:130] > # The certificate for the secure metrics server.
	I0116 02:56:43.517886  536361 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0116 02:56:43.517900  536361 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0116 02:56:43.517912  536361 command_runner.go:130] > # certificate on any modification event.
	I0116 02:56:43.517926  536361 command_runner.go:130] > # metrics_cert = ""
	I0116 02:56:43.517939  536361 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0116 02:56:43.517948  536361 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0116 02:56:43.517958  536361 command_runner.go:130] > # metrics_key = ""
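Turning the metrics block above on end to end might look like this (a sketch; the port is the default named above, and the grep pattern matches collectors from the list):

	sudo tee /etc/crio/crio.conf.d/10-metrics.conf <<-'EOF'
	[crio.metrics]
	enable_metrics = true
	metrics_port = 9090
	EOF
	sudo systemctl restart crio
	curl -s http://127.0.0.1:9090/metrics | grep '^crio_operations'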
	I0116 02:56:43.517971  536361 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0116 02:56:43.517981  536361 command_runner.go:130] > [crio.tracing]
	I0116 02:56:43.517994  536361 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0116 02:56:43.518005  536361 command_runner.go:130] > # enable_tracing = false
	I0116 02:56:43.518018  536361 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0116 02:56:43.518028  536361 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0116 02:56:43.518037  536361 command_runner.go:130] > # Number of samples to collect per million spans.
	I0116 02:56:43.518049  536361 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0116 02:56:43.518062  536361 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0116 02:56:43.518072  536361 command_runner.go:130] > [crio.stats]
	I0116 02:56:43.518086  536361 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0116 02:56:43.518099  536361 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0116 02:56:43.518110  536361 command_runner.go:130] > # stats_collection_period = 0
	I0116 02:56:43.518205  536361 cni.go:84] Creating CNI manager for ""
	I0116 02:56:43.518223  536361 cni.go:136] 1 nodes found, recommending kindnet
	I0116 02:56:43.518246  536361 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 02:56:43.518276  536361 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-061156 NodeName:multinode-061156 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 02:56:43.518449  536361 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-061156"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
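To see how far this rendered file strays from stock kubeadm defaults, one could diff it against the built-in template (a sketch; the target path is where the file lands after the copy later in this log):

	kubeadm config print init-defaults > /tmp/defaults.yaml
	diff /tmp/defaults.yaml /var/tmp/minikube/kubeadm.yaml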
	I0116 02:56:43.518521  536361 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-061156 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-061156 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 02:56:43.518596  536361 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 02:56:43.526984  536361 command_runner.go:130] > kubeadm
	I0116 02:56:43.527008  536361 command_runner.go:130] > kubectl
	I0116 02:56:43.527015  536361 command_runner.go:130] > kubelet
	I0116 02:56:43.527042  536361 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 02:56:43.527104  536361 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 02:56:43.535252  536361 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I0116 02:56:43.551543  536361 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 02:56:43.567851  536361 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I0116 02:56:43.583804  536361 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0116 02:56:43.587011  536361 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
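That one-liner pins control-plane.minikube.internal in /etc/hosts; spelled out, it does three things (same effect, sketch):

	grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > /tmp/h.$$  # drop any stale entry
	printf '192.168.58.2\tcontrol-plane.minikube.internal\n' >> /tmp/h.$$ # append the fresh pin
	sudo cp /tmp/h.$$ /etc/hosts                                          # install the result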
	I0116 02:56:43.597189  536361 certs.go:56] Setting up /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/multinode-061156 for IP: 192.168.58.2
	I0116 02:56:43.597227  536361 certs.go:190] acquiring lock for shared ca certs: {Name:mk8883b8c07de4938a73ea389443b00589415803 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:56:43.597371  536361 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17965-443749/.minikube/ca.key
	I0116 02:56:43.597431  536361 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17965-443749/.minikube/proxy-client-ca.key
	I0116 02:56:43.597491  536361 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/multinode-061156/client.key
	I0116 02:56:43.597507  536361 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/multinode-061156/client.crt with IP's: []
	I0116 02:56:43.739838  536361 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/multinode-061156/client.crt ...
	I0116 02:56:43.739873  536361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/multinode-061156/client.crt: {Name:mk46549ef2340d2c894029b83c4a90780c24f571 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:56:43.740049  536361 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/multinode-061156/client.key ...
	I0116 02:56:43.740062  536361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/multinode-061156/client.key: {Name:mk2ab6b3f89590b2c5132b0a50980e14b6323976 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:56:43.740135  536361 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/multinode-061156/apiserver.key.cee25041
	I0116 02:56:43.740148  536361 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/multinode-061156/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0116 02:56:43.810345  536361 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/multinode-061156/apiserver.crt.cee25041 ...
	I0116 02:56:43.810378  536361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/multinode-061156/apiserver.crt.cee25041: {Name:mk9f77015c4f55bb0cb3255097f3a8c657aa39cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:56:43.810538  536361 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/multinode-061156/apiserver.key.cee25041 ...
	I0116 02:56:43.810551  536361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/multinode-061156/apiserver.key.cee25041: {Name:mk96138bab58f2d89d2ae84653ce593106988e99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:56:43.810614  536361 certs.go:337] copying /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/multinode-061156/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/multinode-061156/apiserver.crt
	I0116 02:56:43.810710  536361 certs.go:341] copying /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/multinode-061156/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/multinode-061156/apiserver.key
	I0116 02:56:43.810768  536361 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/multinode-061156/proxy-client.key
	I0116 02:56:43.810782  536361 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/multinode-061156/proxy-client.crt with IP's: []
	I0116 02:56:43.904719  536361 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/multinode-061156/proxy-client.crt ...
	I0116 02:56:43.904759  536361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/multinode-061156/proxy-client.crt: {Name:mk40c53cf055351b7eaa01627af85e0df7daf4a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:56:43.904913  536361 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/multinode-061156/proxy-client.key ...
	I0116 02:56:43.904930  536361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/multinode-061156/proxy-client.key: {Name:mk592976b99ac6587ee338098660c0f568c5b3a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:56:43.904998  536361 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/multinode-061156/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0116 02:56:43.905015  536361 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/multinode-061156/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0116 02:56:43.905025  536361 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/multinode-061156/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0116 02:56:43.905042  536361 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/multinode-061156/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0116 02:56:43.905057  536361 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-443749/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0116 02:56:43.905070  536361 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-443749/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0116 02:56:43.905083  536361 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-443749/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0116 02:56:43.905093  536361 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-443749/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0116 02:56:43.905144  536361 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-443749/.minikube/certs/home/jenkins/minikube-integration/17965-443749/.minikube/certs/450573.pem (1338 bytes)
	W0116 02:56:43.905176  536361 certs.go:433] ignoring /home/jenkins/minikube-integration/17965-443749/.minikube/certs/home/jenkins/minikube-integration/17965-443749/.minikube/certs/450573_empty.pem, impossibly tiny 0 bytes
	I0116 02:56:43.905190  536361 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-443749/.minikube/certs/home/jenkins/minikube-integration/17965-443749/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 02:56:43.905212  536361 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-443749/.minikube/certs/home/jenkins/minikube-integration/17965-443749/.minikube/certs/ca.pem (1078 bytes)
	I0116 02:56:43.905241  536361 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-443749/.minikube/certs/home/jenkins/minikube-integration/17965-443749/.minikube/certs/cert.pem (1123 bytes)
	I0116 02:56:43.905271  536361 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-443749/.minikube/certs/home/jenkins/minikube-integration/17965-443749/.minikube/certs/key.pem (1675 bytes)
	I0116 02:56:43.905310  536361 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-443749/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17965-443749/.minikube/files/etc/ssl/certs/4505732.pem (1708 bytes)
	I0116 02:56:43.905339  536361 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-443749/.minikube/certs/450573.pem -> /usr/share/ca-certificates/450573.pem
	I0116 02:56:43.905359  536361 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-443749/.minikube/files/etc/ssl/certs/4505732.pem -> /usr/share/ca-certificates/4505732.pem
	I0116 02:56:43.905374  536361 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-443749/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:56:43.906011  536361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/multinode-061156/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 02:56:43.928016  536361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/multinode-061156/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0116 02:56:43.949125  536361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/multinode-061156/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 02:56:43.970935  536361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/multinode-061156/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0116 02:56:43.992137  536361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-443749/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 02:56:44.014281  536361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-443749/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0116 02:56:44.036398  536361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-443749/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 02:56:44.057307  536361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-443749/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 02:56:44.078928  536361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-443749/.minikube/certs/450573.pem --> /usr/share/ca-certificates/450573.pem (1338 bytes)
	I0116 02:56:44.101233  536361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-443749/.minikube/files/etc/ssl/certs/4505732.pem --> /usr/share/ca-certificates/4505732.pem (1708 bytes)
	I0116 02:56:44.122793  536361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-443749/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 02:56:44.143855  536361 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 02:56:44.159633  536361 ssh_runner.go:195] Run: openssl version
	I0116 02:56:44.164333  536361 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0116 02:56:44.164643  536361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/450573.pem && ln -fs /usr/share/ca-certificates/450573.pem /etc/ssl/certs/450573.pem"
	I0116 02:56:44.173048  536361 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/450573.pem
	I0116 02:56:44.176037  536361 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 16 02:43 /usr/share/ca-certificates/450573.pem
	I0116 02:56:44.176092  536361 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 02:43 /usr/share/ca-certificates/450573.pem
	I0116 02:56:44.176138  536361 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/450573.pem
	I0116 02:56:44.182023  536361 command_runner.go:130] > 51391683
	I0116 02:56:44.182247  536361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/450573.pem /etc/ssl/certs/51391683.0"
	I0116 02:56:44.190854  536361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4505732.pem && ln -fs /usr/share/ca-certificates/4505732.pem /etc/ssl/certs/4505732.pem"
	I0116 02:56:44.199757  536361 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4505732.pem
	I0116 02:56:44.202967  536361 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 16 02:43 /usr/share/ca-certificates/4505732.pem
	I0116 02:56:44.203006  536361 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 02:43 /usr/share/ca-certificates/4505732.pem
	I0116 02:56:44.203051  536361 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4505732.pem
	I0116 02:56:44.209288  536361 command_runner.go:130] > 3ec20f2e
	I0116 02:56:44.209376  536361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4505732.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 02:56:44.217996  536361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 02:56:44.226420  536361 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:56:44.229556  536361 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 16 02:37 /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:56:44.229608  536361 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 02:37 /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:56:44.229655  536361 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:56:44.235644  536361 command_runner.go:130] > b5213941
	I0116 02:56:44.235851  536361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
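The b5213941.0-style names are not arbitrary: OpenSSL locates CA certificates through symlinks named after the subject hash, which is exactly what the hash-then-link pairs above construct. Condensed (a sketch mirroring the minikubeCA step):

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"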
	I0116 02:56:44.244398  536361 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 02:56:44.247388  536361 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0116 02:56:44.247465  536361 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0116 02:56:44.247519  536361 kubeadm.go:404] StartCluster: {Name:multinode-061156 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-061156 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 02:56:44.247632  536361 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 02:56:44.247687  536361 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 02:56:44.280508  536361 cri.go:89] found id: ""
	I0116 02:56:44.280568  536361 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 02:56:44.288663  536361 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0116 02:56:44.288692  536361 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0116 02:56:44.288699  536361 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0116 02:56:44.288773  536361 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 02:56:44.296693  536361 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0116 02:56:44.296739  536361 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 02:56:44.304616  536361 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0116 02:56:44.304650  536361 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0116 02:56:44.304662  536361 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0116 02:56:44.304675  536361 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 02:56:44.304708  536361 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 02:56:44.304741  536361 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0116 02:56:44.348969  536361 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0116 02:56:44.349009  536361 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I0116 02:56:44.349086  536361 kubeadm.go:322] [preflight] Running pre-flight checks
	I0116 02:56:44.349101  536361 command_runner.go:130] > [preflight] Running pre-flight checks
	I0116 02:56:44.384747  536361 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0116 02:56:44.384778  536361 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0116 02:56:44.384838  536361 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1048-gcp
	I0116 02:56:44.384849  536361 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1048-gcp
	I0116 02:56:44.384913  536361 kubeadm.go:322] OS: Linux
	I0116 02:56:44.384923  536361 command_runner.go:130] > OS: Linux
	I0116 02:56:44.384984  536361 kubeadm.go:322] CGROUPS_CPU: enabled
	I0116 02:56:44.384999  536361 command_runner.go:130] > CGROUPS_CPU: enabled
	I0116 02:56:44.385047  536361 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0116 02:56:44.385058  536361 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0116 02:56:44.385130  536361 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0116 02:56:44.385142  536361 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0116 02:56:44.385214  536361 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0116 02:56:44.385227  536361 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0116 02:56:44.385290  536361 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0116 02:56:44.385301  536361 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0116 02:56:44.385365  536361 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0116 02:56:44.385377  536361 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0116 02:56:44.385436  536361 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0116 02:56:44.385447  536361 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0116 02:56:44.385519  536361 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0116 02:56:44.385531  536361 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0116 02:56:44.385574  536361 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0116 02:56:44.385585  536361 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0116 02:56:44.449890  536361 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 02:56:44.449949  536361 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 02:56:44.450096  536361 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 02:56:44.450112  536361 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 02:56:44.450232  536361 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0116 02:56:44.450244  536361 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0116 02:56:44.646834  536361 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 02:56:44.650063  536361 out.go:204]   - Generating certificates and keys ...
	I0116 02:56:44.646891  536361 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 02:56:44.650168  536361 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0116 02:56:44.650180  536361 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0116 02:56:44.650265  536361 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0116 02:56:44.650287  536361 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0116 02:56:45.095873  536361 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0116 02:56:45.095909  536361 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0116 02:56:45.277881  536361 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0116 02:56:45.277914  536361 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0116 02:56:45.547196  536361 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0116 02:56:45.547249  536361 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0116 02:56:45.626406  536361 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0116 02:56:45.626444  536361 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0116 02:56:45.761406  536361 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0116 02:56:45.761440  536361 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0116 02:56:45.761585  536361 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-061156] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0116 02:56:45.761628  536361 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-061156] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0116 02:56:45.977615  536361 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0116 02:56:45.977648  536361 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0116 02:56:45.977748  536361 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-061156] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0116 02:56:45.977756  536361 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-061156] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0116 02:56:46.215830  536361 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0116 02:56:46.215861  536361 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0116 02:56:46.321657  536361 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0116 02:56:46.321701  536361 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0116 02:56:46.676498  536361 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0116 02:56:46.676536  536361 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0116 02:56:46.676643  536361 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 02:56:46.676672  536361 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 02:56:46.992742  536361 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 02:56:46.992780  536361 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 02:56:47.385284  536361 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 02:56:47.385317  536361 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 02:56:47.511712  536361 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 02:56:47.511753  536361 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 02:56:47.816512  536361 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 02:56:47.816550  536361 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 02:56:47.817133  536361 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 02:56:47.817155  536361 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 02:56:47.820515  536361 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 02:56:47.823465  536361 out.go:204]   - Booting up control plane ...
	I0116 02:56:47.820605  536361 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 02:56:47.823591  536361 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 02:56:47.823618  536361 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 02:56:47.823710  536361 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 02:56:47.823723  536361 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 02:56:47.823792  536361 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 02:56:47.823800  536361 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 02:56:47.830936  536361 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 02:56:47.830958  536361 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 02:56:47.831703  536361 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 02:56:47.831723  536361 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 02:56:47.831786  536361 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0116 02:56:47.831798  536361 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0116 02:56:47.907151  536361 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0116 02:56:47.907189  536361 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0116 02:56:52.909862  536361 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.002816 seconds
	I0116 02:56:52.909888  536361 command_runner.go:130] > [apiclient] All control plane components are healthy after 5.002816 seconds
	I0116 02:56:52.910058  536361 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0116 02:56:52.910070  536361 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0116 02:56:52.922078  536361 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0116 02:56:52.922100  536361 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0116 02:56:53.439020  536361 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0116 02:56:53.439047  536361 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0116 02:56:53.439219  536361 kubeadm.go:322] [mark-control-plane] Marking the node multinode-061156 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0116 02:56:53.439245  536361 command_runner.go:130] > [mark-control-plane] Marking the node multinode-061156 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0116 02:56:53.948426  536361 kubeadm.go:322] [bootstrap-token] Using token: i0mhmk.fvc9wk8swjhxb121
	I0116 02:56:53.949924  536361 out.go:204]   - Configuring RBAC rules ...
	I0116 02:56:53.948472  536361 command_runner.go:130] > [bootstrap-token] Using token: i0mhmk.fvc9wk8swjhxb121
	I0116 02:56:53.950094  536361 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0116 02:56:53.950125  536361 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0116 02:56:53.953663  536361 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0116 02:56:53.953686  536361 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0116 02:56:53.959488  536361 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0116 02:56:53.959506  536361 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0116 02:56:53.962701  536361 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0116 02:56:53.962718  536361 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0116 02:56:53.965959  536361 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0116 02:56:53.965980  536361 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0116 02:56:53.968723  536361 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0116 02:56:53.968741  536361 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0116 02:56:53.979573  536361 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0116 02:56:53.979593  536361 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0116 02:56:54.200665  536361 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0116 02:56:54.200699  536361 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0116 02:56:54.405541  536361 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0116 02:56:54.405574  536361 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0116 02:56:54.406757  536361 kubeadm.go:322] 
	I0116 02:56:54.406905  536361 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0116 02:56:54.406936  536361 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0116 02:56:54.406950  536361 kubeadm.go:322] 
	I0116 02:56:54.407075  536361 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0116 02:56:54.407099  536361 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0116 02:56:54.407106  536361 kubeadm.go:322] 
	I0116 02:56:54.407135  536361 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0116 02:56:54.407146  536361 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0116 02:56:54.407215  536361 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0116 02:56:54.407227  536361 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0116 02:56:54.407287  536361 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0116 02:56:54.407297  536361 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0116 02:56:54.407304  536361 kubeadm.go:322] 
	I0116 02:56:54.407378  536361 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0116 02:56:54.407388  536361 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0116 02:56:54.407393  536361 kubeadm.go:322] 
	I0116 02:56:54.407455  536361 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0116 02:56:54.407465  536361 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0116 02:56:54.407471  536361 kubeadm.go:322] 
	I0116 02:56:54.407540  536361 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0116 02:56:54.407551  536361 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0116 02:56:54.407647  536361 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0116 02:56:54.407664  536361 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0116 02:56:54.407756  536361 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0116 02:56:54.407766  536361 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0116 02:56:54.407772  536361 kubeadm.go:322] 
	I0116 02:56:54.407882  536361 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0116 02:56:54.407891  536361 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0116 02:56:54.407992  536361 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0116 02:56:54.408002  536361 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0116 02:56:54.408007  536361 kubeadm.go:322] 
	I0116 02:56:54.408116  536361 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token i0mhmk.fvc9wk8swjhxb121 \
	I0116 02:56:54.408127  536361 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token i0mhmk.fvc9wk8swjhxb121 \
	I0116 02:56:54.408273  536361 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8cf2f52e6e786139868a71d0da6c4e60f90166b48a1f8c1755e09d650797d85a \
	I0116 02:56:54.408285  536361 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:8cf2f52e6e786139868a71d0da6c4e60f90166b48a1f8c1755e09d650797d85a \
	I0116 02:56:54.408314  536361 kubeadm.go:322] 	--control-plane 
	I0116 02:56:54.408325  536361 command_runner.go:130] > 	--control-plane 
	I0116 02:56:54.408337  536361 kubeadm.go:322] 
	I0116 02:56:54.408458  536361 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0116 02:56:54.408469  536361 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0116 02:56:54.408475  536361 kubeadm.go:322] 
	I0116 02:56:54.408591  536361 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token i0mhmk.fvc9wk8swjhxb121 \
	I0116 02:56:54.408621  536361 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token i0mhmk.fvc9wk8swjhxb121 \
	I0116 02:56:54.408761  536361 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8cf2f52e6e786139868a71d0da6c4e60f90166b48a1f8c1755e09d650797d85a 
	I0116 02:56:54.408779  536361 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:8cf2f52e6e786139868a71d0da6c4e60f90166b48a1f8c1755e09d650797d85a 
	I0116 02:56:54.411094  536361 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1048-gcp\n", err: exit status 1
	I0116 02:56:54.411118  536361 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1048-gcp\n", err: exit status 1
	I0116 02:56:54.411280  536361 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0116 02:56:54.411299  536361 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
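	Both warnings are routine in this environment: the GCP kernel ships no "configs" module, so kubeadm cannot verify the kernel build config, and the kubelet unit is managed by minikube rather than enabled for boot. A quick probe for the first case, assuming the usual /proc/config.gz location that the verifier also consults (a sketch, not kubeadm's actual check):

    package main

    import (
        "fmt"
        "os"
    )

    // If the kernel config is not exposed, kubeadm's SystemVerification
    // preflight prints the modprobe warning above and continues anyway.
    func main() {
        if _, err := os.Stat("/proc/config.gz"); err != nil {
            fmt.Println("kernel config not exposed; the warning is expected")
            return
        }
        fmt.Println("kernel config available at /proc/config.gz")
    }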
	I0116 02:56:54.411339  536361 cni.go:84] Creating CNI manager for ""
	I0116 02:56:54.411352  536361 cni.go:136] 1 nodes found, recommending kindnet
	I0116 02:56:54.412981  536361 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0116 02:56:54.414135  536361 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0116 02:56:54.418140  536361 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0116 02:56:54.418179  536361 command_runner.go:130] >   Size: 4085020   	Blocks: 7992       IO Block: 4096   regular file
	I0116 02:56:54.418206  536361 command_runner.go:130] > Device: 37h/55d	Inode: 1047659     Links: 1
	I0116 02:56:54.418222  536361 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0116 02:56:54.418231  536361 command_runner.go:130] > Access: 2023-12-04 16:39:01.000000000 +0000
	I0116 02:56:54.418243  536361 command_runner.go:130] > Modify: 2023-12-04 16:39:01.000000000 +0000
	I0116 02:56:54.418255  536361 command_runner.go:130] > Change: 2024-01-16 02:37:08.170599703 +0000
	I0116 02:56:54.418268  536361 command_runner.go:130] >  Birth: 2024-01-16 02:37:08.142597611 +0000
	I0116 02:56:54.418335  536361 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0116 02:56:54.418350  536361 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0116 02:56:54.436914  536361 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0116 02:56:55.097540  536361 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0116 02:56:55.101957  536361 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0116 02:56:55.109701  536361 command_runner.go:130] > serviceaccount/kindnet created
	I0116 02:56:55.118244  536361 command_runner.go:130] > daemonset.apps/kindnet created
	I0116 02:56:55.122697  536361 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 02:56:55.122775  536361 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:56:55.122801  536361 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=6e8fa5f64d0e7272be43ff25ed3826261f0a2578 minikube.k8s.io/name=multinode-061156 minikube.k8s.io/updated_at=2024_01_16T02_56_55_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:56:55.210229  536361 command_runner.go:130] > node/multinode-061156 labeled
	I0116 02:56:55.212761  536361 command_runner.go:130] > -16
	I0116 02:56:55.212791  536361 ops.go:34] apiserver oom_adj: -16
	I0116 02:56:55.212816  536361 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
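	Two setup steps interleave here: the minikube-rbac binding grants cluster-admin to the kube-system default service account, and ops.go records the API server's OOM score adjustment, where -16 tells the kernel to strongly avoid OOM-killing the process. The oom_adj read amounts to the following, with the PID assumed to come from pgrep as in the command above (a sketch of the probe, not minikube's code):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // Read /proc/<pid>/oom_adj for the API server; negative values make
    // the process a less likely OOM-kill target.
    func main() {
        if len(os.Args) < 2 {
            fmt.Println("usage: oomadj <pid>   (e.g. the output of: pgrep kube-apiserver)")
            return
        }
        data, err := os.ReadFile("/proc/" + os.Args[1] + "/oom_adj")
        if err != nil {
            panic(err)
        }
        fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(data)))
    }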
	I0116 02:56:55.212959  536361 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:56:55.277240  536361 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:56:55.713468  536361 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:56:55.777402  536361 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:56:56.212973  536361 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:56:56.275083  536361 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:56:56.713255  536361 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:56:56.773966  536361 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:56:57.213402  536361 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:56:57.274589  536361 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:56:57.713764  536361 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:56:57.777165  536361 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:56:58.213453  536361 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:56:58.276916  536361 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:56:58.713448  536361 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:56:58.777206  536361 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:56:59.213427  536361 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:56:59.274854  536361 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:56:59.713993  536361 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:56:59.777478  536361 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:57:00.213049  536361 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:57:00.279012  536361 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:57:00.713511  536361 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:57:00.778883  536361 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:57:01.213368  536361 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:57:01.277580  536361 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:57:01.713915  536361 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:57:01.775652  536361 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:57:02.213109  536361 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:57:02.276482  536361 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:57:02.713041  536361 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:57:02.772822  536361 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:57:03.213965  536361 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:57:03.273736  536361 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:57:03.713260  536361 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:57:03.777883  536361 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:57:04.213442  536361 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:57:04.273008  536361 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:57:04.713029  536361 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:57:04.776951  536361 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:57:05.213745  536361 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:57:05.279182  536361 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:57:05.713478  536361 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:57:05.775579  536361 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:57:06.213999  536361 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:57:06.277681  536361 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:57:06.713524  536361 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:57:06.775764  536361 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:57:07.213012  536361 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:57:07.275839  536361 command_runner.go:130] > NAME      SECRETS   AGE
	I0116 02:57:07.275898  536361 command_runner.go:130] > default   0         1s
	I0116 02:57:07.278356  536361 kubeadm.go:1088] duration metric: took 12.155636018s to wait for elevateKubeSystemPrivileges.
	I0116 02:57:07.278396  536361 kubeadm.go:406] StartCluster complete in 23.030881416s
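	The twelve seconds of NotFound errors above are a fixed-interval retry loop, not a failure: the "default" service account is created asynchronously by the controller manager, so minikube re-runs the get roughly every 500ms until it succeeds. The same pattern in isolation, with the timeout chosen here purely for illustration and kubectl assumed on PATH:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // Poll until the default service account exists; NotFound just means
    // the controller manager has not created it yet.
    func main() {
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            if exec.Command("kubectl", "get", "sa", "default").Run() == nil {
                fmt.Println("default service account is ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for the default service account")
    }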
	I0116 02:57:07.278424  536361 settings.go:142] acquiring lock: {Name:mk9828dcd1e8ccfccc84768ea3ab177cb7be8afc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:57:07.278498  536361 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17965-443749/kubeconfig
	I0116 02:57:07.279476  536361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-443749/kubeconfig: {Name:mka24a12b8e1d963a345dadb59b1cdf4f4debade Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:57:07.279723  536361 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 02:57:07.279815  536361 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 02:57:07.279904  536361 addons.go:69] Setting storage-provisioner=true in profile "multinode-061156"
	I0116 02:57:07.279929  536361 addons.go:234] Setting addon storage-provisioner=true in "multinode-061156"
	I0116 02:57:07.279929  536361 config.go:182] Loaded profile config "multinode-061156": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 02:57:07.279927  536361 addons.go:69] Setting default-storageclass=true in profile "multinode-061156"
	I0116 02:57:07.279990  536361 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-061156"
	I0116 02:57:07.280019  536361 host.go:66] Checking if "multinode-061156" exists ...
	I0116 02:57:07.280043  536361 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17965-443749/kubeconfig
	I0116 02:57:07.280441  536361 cli_runner.go:164] Run: docker container inspect multinode-061156 --format={{.State.Status}}
	I0116 02:57:07.280398  536361 kapi.go:59] client config for multinode-061156: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17965-443749/.minikube/profiles/multinode-061156/client.crt", KeyFile:"/home/jenkins/minikube-integration/17965-443749/.minikube/profiles/multinode-061156/client.key", CAFile:"/home/jenkins/minikube-integration/17965-443749/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19be0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 02:57:07.280584  536361 cli_runner.go:164] Run: docker container inspect multinode-061156 --format={{.State.Status}}
	I0116 02:57:07.281228  536361 cert_rotation.go:137] Starting client certificate rotation controller
	I0116 02:57:07.281629  536361 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0116 02:57:07.281646  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:07.281662  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:07.281671  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:07.290616  536361 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0116 02:57:07.290641  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:07.290652  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:07.290661  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:07.290669  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:07.290678  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:07.290692  536361 round_trippers.go:580]     Content-Length: 291
	I0116 02:57:07.290701  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:07 GMT
	I0116 02:57:07.290713  536361 round_trippers.go:580]     Audit-Id: 7e587c9d-6670-4c12-ade3-e582a229c4fc
	I0116 02:57:07.290746  536361 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0402feb2-a751-4ef6-b708-443a517c68b1","resourceVersion":"345","creationTimestamp":"2024-01-16T02:56:54Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0116 02:57:07.291147  536361 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0402feb2-a751-4ef6-b708-443a517c68b1","resourceVersion":"345","creationTimestamp":"2024-01-16T02:56:54Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0116 02:57:07.291208  536361 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0116 02:57:07.291221  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:07.291229  536361 round_trippers.go:473]     Content-Type: application/json
	I0116 02:57:07.291235  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:07.291243  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:07.298678  536361 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0116 02:57:07.298708  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:07.298719  536361 round_trippers.go:580]     Content-Length: 291
	I0116 02:57:07.298728  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:07 GMT
	I0116 02:57:07.298736  536361 round_trippers.go:580]     Audit-Id: be3e890d-ee73-4bf8-896b-e056133a3858
	I0116 02:57:07.298744  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:07.298753  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:07.298764  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:07.298773  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:07.298805  536361 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0402feb2-a751-4ef6-b708-443a517c68b1","resourceVersion":"350","creationTimestamp":"2024-01-16T02:56:54Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
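	The GET/PUT pair above rescales the CoreDNS deployment from 2 replicas to 1 through the autoscaling/v1 Scale subresource, which is why only spec.replicas differs between the two bodies; a single-node cluster needs a single DNS replica. The kubectl equivalent, wrapped in Go to keep one language throughout (kubectl assumed on PATH and pointed at the same cluster):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // Same effect as the Scale PUT above: set the coredns deployment to one replica.
    func main() {
        out, err := exec.Command("kubectl", "-n", "kube-system",
            "scale", "deployment", "coredns", "--replicas=1").CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            panic(err)
        }
    }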
	I0116 02:57:07.302970  536361 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 02:57:07.305878  536361 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 02:57:07.305900  536361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 02:57:07.305967  536361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-061156
	I0116 02:57:07.306104  536361 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17965-443749/kubeconfig
	I0116 02:57:07.306431  536361 kapi.go:59] client config for multinode-061156: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17965-443749/.minikube/profiles/multinode-061156/client.crt", KeyFile:"/home/jenkins/minikube-integration/17965-443749/.minikube/profiles/multinode-061156/client.key", CAFile:"/home/jenkins/minikube-integration/17965-443749/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19be0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 02:57:07.306800  536361 addons.go:234] Setting addon default-storageclass=true in "multinode-061156"
	I0116 02:57:07.306846  536361 host.go:66] Checking if "multinode-061156" exists ...
	I0116 02:57:07.307419  536361 cli_runner.go:164] Run: docker container inspect multinode-061156 --format={{.State.Status}}
	I0116 02:57:07.326513  536361 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 02:57:07.326546  536361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 02:57:07.326614  536361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-061156
	I0116 02:57:07.326820  536361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33282 SSHKeyPath:/home/jenkins/minikube-integration/17965-443749/.minikube/machines/multinode-061156/id_rsa Username:docker}
	I0116 02:57:07.348384  536361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33282 SSHKeyPath:/home/jenkins/minikube-integration/17965-443749/.minikube/machines/multinode-061156/id_rsa Username:docker}
	I0116 02:57:07.360875  536361 command_runner.go:130] > apiVersion: v1
	I0116 02:57:07.360897  536361 command_runner.go:130] > data:
	I0116 02:57:07.360901  536361 command_runner.go:130] >   Corefile: |
	I0116 02:57:07.360905  536361 command_runner.go:130] >     .:53 {
	I0116 02:57:07.360909  536361 command_runner.go:130] >         errors
	I0116 02:57:07.360914  536361 command_runner.go:130] >         health {
	I0116 02:57:07.360918  536361 command_runner.go:130] >            lameduck 5s
	I0116 02:57:07.360929  536361 command_runner.go:130] >         }
	I0116 02:57:07.360934  536361 command_runner.go:130] >         ready
	I0116 02:57:07.360944  536361 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0116 02:57:07.360950  536361 command_runner.go:130] >            pods insecure
	I0116 02:57:07.360958  536361 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0116 02:57:07.360975  536361 command_runner.go:130] >            ttl 30
	I0116 02:57:07.360986  536361 command_runner.go:130] >         }
	I0116 02:57:07.360997  536361 command_runner.go:130] >         prometheus :9153
	I0116 02:57:07.361009  536361 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0116 02:57:07.361018  536361 command_runner.go:130] >            max_concurrent 1000
	I0116 02:57:07.361030  536361 command_runner.go:130] >         }
	I0116 02:57:07.361037  536361 command_runner.go:130] >         cache 30
	I0116 02:57:07.361041  536361 command_runner.go:130] >         loop
	I0116 02:57:07.361044  536361 command_runner.go:130] >         reload
	I0116 02:57:07.361051  536361 command_runner.go:130] >         loadbalance
	I0116 02:57:07.361054  536361 command_runner.go:130] >     }
	I0116 02:57:07.361058  536361 command_runner.go:130] > kind: ConfigMap
	I0116 02:57:07.361062  536361 command_runner.go:130] > metadata:
	I0116 02:57:07.361077  536361 command_runner.go:130] >   creationTimestamp: "2024-01-16T02:56:54Z"
	I0116 02:57:07.361087  536361 command_runner.go:130] >   name: coredns
	I0116 02:57:07.361096  536361 command_runner.go:130] >   namespace: kube-system
	I0116 02:57:07.361100  536361 command_runner.go:130] >   resourceVersion: "238"
	I0116 02:57:07.361107  536361 command_runner.go:130] >   uid: a48c41eb-69bc-41ef-ba30-7d28b046b680
	I0116 02:57:07.363564  536361 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0116 02:57:07.502974  536361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 02:57:07.526349  536361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 02:57:07.781905  536361 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0116 02:57:07.781931  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:07.781943  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:07.781951  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:07.803800  536361 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0116 02:57:07.803830  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:07.803843  536361 round_trippers.go:580]     Audit-Id: 04465a3a-3590-4545-94d6-dc0bc007734c
	I0116 02:57:07.803853  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:07.803862  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:07.803872  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:07.803882  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:07.803890  536361 round_trippers.go:580]     Content-Length: 291
	I0116 02:57:07.803899  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:07 GMT
	I0116 02:57:07.804192  536361 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0402feb2-a751-4ef6-b708-443a517c68b1","resourceVersion":"370","creationTimestamp":"2024-01-16T02:56:54Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0116 02:57:07.804347  536361 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-061156" context rescaled to 1 replicas
	I0116 02:57:07.804383  536361 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 02:57:07.806607  536361 out.go:177] * Verifying Kubernetes components...
	I0116 02:57:07.808239  536361 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 02:57:08.010178  536361 command_runner.go:130] > configmap/coredns replaced
	I0116 02:57:08.016283  536361 start.go:929] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
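	The replace above rewrites the Corefile in place: the sed pipeline inserts a hosts block mapping host.minikube.internal to the gateway address 192.168.58.1 (plus a log directive before errors) and re-applies the ConfigMap. A quick check that the record landed, assuming kubectl access to the same cluster (illustrative sketch only):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // Fetch the CoreDNS ConfigMap and confirm the injected host record is present.
    func main() {
        out, err := exec.Command("kubectl", "-n", "kube-system",
            "get", "configmap", "coredns", "-o", "yaml").Output()
        if err != nil {
            panic(err)
        }
        fmt.Println("host record present:",
            strings.Contains(string(out), "host.minikube.internal"))
    }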
	I0116 02:57:08.328012  536361 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0116 02:57:08.332986  536361 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0116 02:57:08.339433  536361 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0116 02:57:08.346289  536361 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0116 02:57:08.353429  536361 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0116 02:57:08.363875  536361 command_runner.go:130] > pod/storage-provisioner created
	I0116 02:57:08.400973  536361 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0116 02:57:08.401623  536361 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17965-443749/kubeconfig
	I0116 02:57:08.401780  536361 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0116 02:57:08.401802  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:08.401814  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:08.401825  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:08.401950  536361 kapi.go:59] client config for multinode-061156: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17965-443749/.minikube/profiles/multinode-061156/client.crt", KeyFile:"/home/jenkins/minikube-integration/17965-443749/.minikube/profiles/multinode-061156/client.key", CAFile:"/home/jenkins/minikube-integration/17965-443749/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19be0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 02:57:08.402307  536361 node_ready.go:35] waiting up to 6m0s for node "multinode-061156" to be "Ready" ...
	I0116 02:57:08.402410  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:08.402417  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:08.402429  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:08.402437  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:08.404409  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:08.404430  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:08.404441  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:08.404450  536361 round_trippers.go:580]     Content-Length: 1273
	I0116 02:57:08.404462  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:08 GMT
	I0116 02:57:08.404474  536361 round_trippers.go:580]     Audit-Id: e0a051a1-64b1-4ca9-b81c-4223a7f1fe57
	I0116 02:57:08.404487  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:08.404499  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:08.404511  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:08.404551  536361 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"390"},"items":[{"metadata":{"name":"standard","uid":"45a4bd4e-9ffb-428a-b2f8-0b519b782f84","resourceVersion":"375","creationTimestamp":"2024-01-16T02:57:08Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-16T02:57:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0116 02:57:08.404704  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:08.404725  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:08.404734  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:08.404742  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:08.404752  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:08.404761  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:08 GMT
	I0116 02:57:08.404794  536361 round_trippers.go:580]     Audit-Id: 6313fab2-1b2e-4574-a6c8-0af9dc419501
	I0116 02:57:08.404807  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:08.405002  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"315","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0116 02:57:08.405052  536361 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"45a4bd4e-9ffb-428a-b2f8-0b519b782f84","resourceVersion":"375","creationTimestamp":"2024-01-16T02:57:08Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-16T02:57:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0116 02:57:08.405116  536361 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0116 02:57:08.405128  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:08.405139  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:08.405149  536361 round_trippers.go:473]     Content-Type: application/json
	I0116 02:57:08.405161  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:08.408507  536361 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:57:08.408529  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:08.408538  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:08.408547  536361 round_trippers.go:580]     Content-Length: 1220
	I0116 02:57:08.408556  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:08 GMT
	I0116 02:57:08.408565  536361 round_trippers.go:580]     Audit-Id: 42f2f575-c0a3-40b1-83ea-8d862fbea064
	I0116 02:57:08.408573  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:08.408581  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:08.408589  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:08.408618  536361 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"45a4bd4e-9ffb-428a-b2f8-0b519b782f84","resourceVersion":"375","creationTimestamp":"2024-01-16T02:57:08Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-16T02:57:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0116 02:57:08.410542  536361 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0116 02:57:08.411893  536361 addons.go:505] enable addons completed in 1.132092687s: enabled=[storage-provisioner default-storageclass]
	I0116 02:57:08.902701  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:08.902725  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:08.902734  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:08.902740  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:08.905283  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:08.905303  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:08.905310  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:08.905315  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:08 GMT
	I0116 02:57:08.905320  536361 round_trippers.go:580]     Audit-Id: bb9caa50-7aab-4633-8777-2d592287665a
	I0116 02:57:08.905326  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:08.905332  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:08.905338  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:08.905472  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"315","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0116 02:57:09.403155  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:09.403182  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:09.403190  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:09.403196  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:09.405602  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:09.405632  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:09.405644  536361 round_trippers.go:580]     Audit-Id: 4a81be41-9665-4c8b-b7d6-dd3d10de7454
	I0116 02:57:09.405652  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:09.405658  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:09.405668  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:09.405678  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:09.405683  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:09 GMT
	I0116 02:57:09.405808  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"315","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0116 02:57:09.903317  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:09.903343  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:09.903351  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:09.903358  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:09.905785  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:09.905806  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:09.905813  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:09.905818  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:09.905823  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:09.905828  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:09 GMT
	I0116 02:57:09.905833  536361 round_trippers.go:580]     Audit-Id: 6b3e8c58-1cff-48ee-9b3a-67c5ce58817e
	I0116 02:57:09.905838  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:09.906070  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"315","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0116 02:57:10.402731  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:10.402756  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:10.402765  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:10.402771  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:10.405149  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:10.405171  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:10.405181  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:10.405189  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:10.405197  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:10 GMT
	I0116 02:57:10.405205  536361 round_trippers.go:580]     Audit-Id: 3342b687-f0c2-4e67-8790-ad41a49b2aab
	I0116 02:57:10.405214  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:10.405226  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:10.405358  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"315","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0116 02:57:10.405823  536361 node_ready.go:58] node "multinode-061156" has status "Ready":"False"
	I0116 02:57:10.902944  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:10.902966  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:10.902976  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:10.902982  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:10.905365  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:10.905385  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:10.905395  536361 round_trippers.go:580]     Audit-Id: 15d7cca0-f413-4992-aaf1-4cd7e6eda575
	I0116 02:57:10.905402  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:10.905409  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:10.905417  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:10.905424  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:10.905433  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:10 GMT
	I0116 02:57:10.905552  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"315","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0116 02:57:11.403194  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:11.403227  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:11.403240  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:11.403248  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:11.405609  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:11.405630  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:11.405638  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:11.405643  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:11.405650  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:11.405658  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:11 GMT
	I0116 02:57:11.405667  536361 round_trippers.go:580]     Audit-Id: 18d9ff2b-e5b9-4660-b89e-6558a4bc6a3a
	I0116 02:57:11.405676  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:11.405858  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"315","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0116 02:57:11.902834  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:11.902857  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:11.902865  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:11.902872  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:11.905225  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:11.905250  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:11.905260  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:11 GMT
	I0116 02:57:11.905266  536361 round_trippers.go:580]     Audit-Id: d357d4d8-01c4-4b6f-b2fa-53e0daba0e53
	I0116 02:57:11.905271  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:11.905277  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:11.905283  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:11.905290  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:11.905425  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"315","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0116 02:57:12.402991  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:12.403017  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:12.403025  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:12.403031  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:12.405409  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:12.405430  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:12.405441  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:12 GMT
	I0116 02:57:12.405451  536361 round_trippers.go:580]     Audit-Id: e57c2dbe-da35-4499-adbb-4b80c0c007a3
	I0116 02:57:12.405460  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:12.405469  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:12.405476  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:12.405482  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:12.405609  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"315","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0116 02:57:12.406039  536361 node_ready.go:58] node "multinode-061156" has status "Ready":"False"
	I0116 02:57:12.903243  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:12.903265  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:12.903273  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:12.903279  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:12.905526  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:12.905549  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:12.905559  536361 round_trippers.go:580]     Audit-Id: 83639dd3-09eb-4265-9a1c-e1378fcca924
	I0116 02:57:12.905568  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:12.905576  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:12.905584  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:12.905592  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:12.905601  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:12 GMT
	I0116 02:57:12.905736  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"315","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0116 02:57:13.403454  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:13.403482  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:13.403490  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:13.403496  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:13.405881  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:13.405906  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:13.405919  536361 round_trippers.go:580]     Audit-Id: 32ace62a-009a-4bff-9333-602e63162465
	I0116 02:57:13.405928  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:13.405935  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:13.405942  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:13.405949  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:13.405956  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:13 GMT
	I0116 02:57:13.406085  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"315","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0116 02:57:13.902660  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:13.902686  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:13.902694  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:13.902700  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:13.905141  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:13.905162  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:13.905173  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:13.905180  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:13.905187  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:13.905194  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:13.905201  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:13 GMT
	I0116 02:57:13.905209  536361 round_trippers.go:580]     Audit-Id: 71288bb0-5dca-4670-af38-989670a194fb
	I0116 02:57:13.905317  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"315","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0116 02:57:14.402991  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:14.403020  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:14.403029  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:14.403035  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:14.405418  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:14.405441  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:14.405452  536361 round_trippers.go:580]     Audit-Id: 83747720-03ea-412f-af60-2adfb2e5a2d6
	I0116 02:57:14.405459  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:14.405466  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:14.405474  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:14.405481  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:14.405492  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:14 GMT
	I0116 02:57:14.405625  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"315","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0116 02:57:14.903229  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:14.903255  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:14.903264  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:14.903271  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:14.905571  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:14.905593  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:14.905602  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:14.905609  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:14.905616  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:14 GMT
	I0116 02:57:14.905623  536361 round_trippers.go:580]     Audit-Id: 609044e4-8c24-40f1-af4d-288fcfdfb563
	I0116 02:57:14.905630  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:14.905638  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:14.905784  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"315","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0116 02:57:14.906107  536361 node_ready.go:58] node "multinode-061156" has status "Ready":"False"
	I0116 02:57:15.403428  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:15.403455  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:15.403463  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:15.403470  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:15.405745  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:15.405767  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:15.405777  536361 round_trippers.go:580]     Audit-Id: 38cfa15a-586a-4363-9067-343e4bd7af4f
	I0116 02:57:15.405785  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:15.405794  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:15.405800  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:15.405809  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:15.405816  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:15 GMT
	I0116 02:57:15.405947  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"315","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0116 02:57:15.902509  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:15.902539  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:15.902552  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:15.902562  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:15.905074  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:15.905093  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:15.905101  536361 round_trippers.go:580]     Audit-Id: 0db91d71-8ae4-4bc6-94ee-6eb4a284c829
	I0116 02:57:15.905106  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:15.905111  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:15.905116  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:15.905121  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:15.905126  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:15 GMT
	I0116 02:57:15.905272  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"315","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0116 02:57:16.402914  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:16.402940  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:16.402949  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:16.402955  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:16.405400  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:16.405420  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:16.405427  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:16.405433  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:16.405438  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:16 GMT
	I0116 02:57:16.405443  536361 round_trippers.go:580]     Audit-Id: 17c88984-615d-424a-8dbf-27a110c10689
	I0116 02:57:16.405448  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:16.405453  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:16.405588  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"315","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0116 02:57:16.903421  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:16.903448  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:16.903458  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:16.903464  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:16.905802  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:16.905828  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:16.905839  536361 round_trippers.go:580]     Audit-Id: 79e26b98-637a-49b1-900b-07b97b48f546
	I0116 02:57:16.905848  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:16.905856  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:16.905864  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:16.905873  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:16.905882  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:16 GMT
	I0116 02:57:16.906013  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"315","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0116 02:57:16.906445  536361 node_ready.go:58] node "multinode-061156" has status "Ready":"False"
	I0116 02:57:17.402765  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:17.402786  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:17.402795  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:17.402801  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:17.405251  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:17.405274  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:17.405282  536361 round_trippers.go:580]     Audit-Id: 2fe5e1f6-9548-4126-a457-3dab02898f56
	I0116 02:57:17.405287  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:17.405293  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:17.405298  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:17.405305  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:17.405313  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:17 GMT
	I0116 02:57:17.405485  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"315","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0116 02:57:17.903153  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:17.903181  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:17.903192  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:17.903200  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:17.907208  536361 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:57:17.907238  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:17.907249  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:17.907259  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:17.907268  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:17.907278  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:17 GMT
	I0116 02:57:17.907287  536361 round_trippers.go:580]     Audit-Id: 3df1d5a5-7c98-46e6-b94e-c474b2fe5445
	I0116 02:57:17.907299  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:17.907471  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"315","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0116 02:57:18.403179  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:18.403209  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:18.403222  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:18.403233  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:18.405551  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:18.405571  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:18.405578  536361 round_trippers.go:580]     Audit-Id: 80260fe6-d59b-43d3-a19a-290d89ba55a5
	I0116 02:57:18.405584  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:18.405590  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:18.405595  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:18.405600  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:18.405608  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:18 GMT
	I0116 02:57:18.405755  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"315","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0116 02:57:18.903444  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:18.903469  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:18.903478  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:18.903483  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:18.905850  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:18.905877  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:18.905888  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:18.905896  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:18.905905  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:18.905914  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:18 GMT
	I0116 02:57:18.905921  536361 round_trippers.go:580]     Audit-Id: a740a527-4f67-4bc7-8aa9-653314823232
	I0116 02:57:18.905933  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:18.906055  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"315","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0116 02:57:19.402620  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:19.402644  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:19.402653  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:19.402659  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:19.404875  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:19.404897  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:19.404904  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:19.404910  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:19.404915  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:19.404920  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:19 GMT
	I0116 02:57:19.404925  536361 round_trippers.go:580]     Audit-Id: c7c7cb65-33f6-49ae-a0a0-e42d81ebedd8
	I0116 02:57:19.404930  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:19.405128  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"315","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0116 02:57:19.405442  536361 node_ready.go:58] node "multinode-061156" has status "Ready":"False"
	I0116 02:57:19.902630  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:19.902654  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:19.902662  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:19.902669  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:19.904887  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:19.904908  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:19.904916  536361 round_trippers.go:580]     Audit-Id: b70e98fb-568a-4675-86f0-ff2660e6c2f5
	I0116 02:57:19.904928  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:19.904933  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:19.904938  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:19.904943  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:19.904948  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:19 GMT
	I0116 02:57:19.905055  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"315","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0116 02:57:20.402704  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:20.402737  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:20.402751  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:20.402760  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:20.404942  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:20.404963  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:20.404970  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:20.404976  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:20 GMT
	I0116 02:57:20.404981  536361 round_trippers.go:580]     Audit-Id: 41b6333e-3139-4c2e-8e08-b9a47f3a55ab
	I0116 02:57:20.404986  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:20.404991  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:20.404996  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:20.405157  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"315","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0116 02:57:20.902808  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:20.902832  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:20.902840  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:20.902847  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:20.905104  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:20.905126  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:20.905135  536361 round_trippers.go:580]     Audit-Id: e5909f16-913a-45be-96f9-c87165f96f26
	I0116 02:57:20.905143  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:20.905150  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:20.905156  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:20.905164  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:20.905172  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:20 GMT
	I0116 02:57:20.905304  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"315","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0116 02:57:21.402878  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:21.402904  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:21.402912  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:21.402919  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:21.405329  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:21.405355  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:21.405365  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:21.405373  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:21 GMT
	I0116 02:57:21.405380  536361 round_trippers.go:580]     Audit-Id: b35c78d5-57ba-40ce-a14a-35102a90f0a7
	I0116 02:57:21.405389  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:21.405398  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:21.405409  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:21.405531  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"315","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0116 02:57:21.405858  536361 node_ready.go:58] node "multinode-061156" has status "Ready":"False"
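
	The round_trippers/request records above are client-go's verbose HTTP tracing; response bodies are shown truncated at klog verbosity 8 (-v=9 would print them in full). For reference, an equivalent request can be reproduced by hand, assuming the kubeconfig context minikube generates is named after the profile:

		kubectl --context multinode-061156 get node multinode-061156 -o json --v=8
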
	I0116 02:57:21.903502  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:21.903525  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:21.903534  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:21.903540  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:21.905935  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:21.905963  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:21.905973  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:21.905981  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:21.905989  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:21 GMT
	I0116 02:57:21.905997  536361 round_trippers.go:580]     Audit-Id: bd87c012-cf89-4d32-a13c-79b68d3b7dbc
	I0116 02:57:21.906005  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:21.906018  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:21.906168  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"315","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0116 02:57:22.402700  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:22.402725  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:22.402733  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:22.402740  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:22.405282  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:22.405302  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:22.405312  536361 round_trippers.go:580]     Audit-Id: c27c6435-fad0-4c81-8bce-ce5bbe5ac083
	I0116 02:57:22.405321  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:22.405329  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:22.405337  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:22.405350  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:22.405359  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:22 GMT
	I0116 02:57:22.405546  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"315","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0116 02:57:22.902663  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:22.902689  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:22.902697  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:22.902703  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:22.905060  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:22.905088  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:22.905100  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:22 GMT
	I0116 02:57:22.905108  536361 round_trippers.go:580]     Audit-Id: eda6242a-4cf7-498f-90cc-c2a88dae6058
	I0116 02:57:22.905125  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:22.905132  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:22.905140  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:22.905154  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:22.905287  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"315","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0116 02:57:23.402834  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:23.402862  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:23.402874  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:23.402883  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:23.405221  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:23.405246  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:23.405255  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:23.405261  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:23.405266  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:23 GMT
	I0116 02:57:23.405275  536361 round_trippers.go:580]     Audit-Id: 3f4f8057-07b0-447a-ad16-e8fcb4c6ee29
	I0116 02:57:23.405283  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:23.405291  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:23.405504  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"315","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0116 02:57:23.903145  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:23.903174  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:23.903188  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:23.903196  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:23.905580  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:23.905607  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:23.905617  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:23.905625  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:23.905634  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:23 GMT
	I0116 02:57:23.905642  536361 round_trippers.go:580]     Audit-Id: 10390096-63f3-4aaf-af96-42b5310606ab
	I0116 02:57:23.905650  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:23.905659  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:23.905775  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"315","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0116 02:57:23.906114  536361 node_ready.go:58] node "multinode-061156" has status "Ready":"False"
	I0116 02:57:24.403456  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:24.403486  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:24.403501  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:24.403509  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:24.405889  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:24.405910  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:24.405917  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:24.405923  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:24.405928  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:24.405933  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:24.405965  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:24 GMT
	I0116 02:57:24.405975  536361 round_trippers.go:580]     Audit-Id: 088cce81-5740-4729-9bc0-d222e34d2f07
	I0116 02:57:24.406130  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"315","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0116 02:57:24.902654  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:24.902679  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:24.902688  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:24.902694  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:24.904892  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:24.904914  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:24.904921  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:24.904932  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:24.904938  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:24 GMT
	I0116 02:57:24.904946  536361 round_trippers.go:580]     Audit-Id: 53af818f-b1a9-4d06-917a-fb7fd94c01d7
	I0116 02:57:24.904954  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:24.904963  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:24.905127  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"315","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0116 02:57:25.402586  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:25.402612  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:25.402625  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:25.402636  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:25.405194  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:25.405216  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:25.405224  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:25.405229  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:25 GMT
	I0116 02:57:25.405234  536361 round_trippers.go:580]     Audit-Id: b1165b6f-4689-4df9-8edd-b02380765020
	I0116 02:57:25.405239  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:25.405244  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:25.405262  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:25.405382  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"315","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0116 02:57:25.902864  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:25.902896  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:25.902908  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:25.902936  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:25.905269  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:25.905290  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:25.905297  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:25.905303  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:25 GMT
	I0116 02:57:25.905308  536361 round_trippers.go:580]     Audit-Id: 500a275c-b021-46ea-ae20-11bf42e9e9fe
	I0116 02:57:25.905313  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:25.905318  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:25.905323  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:25.905442  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"315","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0116 02:57:26.403179  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:26.403212  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:26.403229  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:26.403239  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:26.405556  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:26.405584  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:26.405593  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:26.405603  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:26.405611  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:26.405625  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:26 GMT
	I0116 02:57:26.405637  536361 round_trippers.go:580]     Audit-Id: e2e90c53-1c04-413e-ab37-b1aacaa8459f
	I0116 02:57:26.405649  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:26.405801  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"315","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0116 02:57:26.406126  536361 node_ready.go:58] node "multinode-061156" has status "Ready":"False"
	I0116 02:57:26.902516  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:26.902541  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:26.902549  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:26.902555  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:26.904814  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:26.904838  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:26.904847  536361 round_trippers.go:580]     Audit-Id: ab076a65-d07f-4c2d-a600-253721700d66
	I0116 02:57:26.904856  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:26.904863  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:26.904872  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:26.904885  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:26.904896  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:26 GMT
	I0116 02:57:26.905010  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"315","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0116 02:57:27.402700  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:27.402725  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:27.402734  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:27.402741  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:27.404947  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:27.404968  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:27.404975  536361 round_trippers.go:580]     Audit-Id: 00e3827d-d5f0-48a5-8eef-51c15cfafea3
	I0116 02:57:27.404981  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:27.404986  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:27.404993  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:27.405001  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:27.405009  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:27 GMT
	I0116 02:57:27.405176  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"315","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0116 02:57:27.902817  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:27.902841  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:27.902851  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:27.902867  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:27.905156  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:27.905183  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:27.905195  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:27.905204  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:27.905213  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:27 GMT
	I0116 02:57:27.905225  536361 round_trippers.go:580]     Audit-Id: e77ac83e-0067-45ca-aab1-13bd96a7f951
	I0116 02:57:27.905237  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:27.905243  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:27.905355  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"315","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0116 02:57:28.402919  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:28.402943  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:28.402951  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:28.402957  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:28.405260  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:28.405282  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:28.405292  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:28.405299  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:28.405307  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:28.405314  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:28 GMT
	I0116 02:57:28.405323  536361 round_trippers.go:580]     Audit-Id: 861dc015-689c-44ef-a678-b94695afd1c5
	I0116 02:57:28.405335  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:28.405474  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"315","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0116 02:57:28.903153  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:28.903186  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:28.903199  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:28.903209  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:28.905551  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:28.905575  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:28.905582  536361 round_trippers.go:580]     Audit-Id: 195e1e49-df7b-423a-a12b-f47980de2f74
	I0116 02:57:28.905591  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:28.905605  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:28.905612  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:28.905621  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:28.905630  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:28 GMT
	I0116 02:57:28.905769  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"315","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0116 02:57:28.906119  536361 node_ready.go:58] node "multinode-061156" has status "Ready":"False"
	I0116 02:57:29.403466  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:29.403502  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:29.403511  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:29.403521  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:29.405793  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:29.405813  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:29.405820  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:29 GMT
	I0116 02:57:29.405826  536361 round_trippers.go:580]     Audit-Id: d3d07390-f5de-46f9-ac0a-c26924fc2b5a
	I0116 02:57:29.405831  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:29.405836  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:29.405841  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:29.405846  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:29.406087  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"315","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0116 02:57:29.902647  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:29.902673  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:29.902692  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:29.902701  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:29.905035  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:29.905057  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:29.905067  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:29 GMT
	I0116 02:57:29.905074  536361 round_trippers.go:580]     Audit-Id: 2672d658-973c-41a4-ac11-8d87fd15e759
	I0116 02:57:29.905082  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:29.905092  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:29.905100  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:29.905111  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:29.905252  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"315","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0116 02:57:30.402576  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:30.402606  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:30.402621  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:30.402629  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:30.404934  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:30.404957  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:30.404981  536361 round_trippers.go:580]     Audit-Id: 0fc02b1e-833f-43a4-a85e-8518282494fc
	I0116 02:57:30.404993  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:30.405000  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:30.405011  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:30.405021  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:30.405029  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:30 GMT
	I0116 02:57:30.405219  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"315","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0116 02:57:30.902584  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:30.902614  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:30.902628  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:30.902636  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:30.904911  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:30.904934  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:30.904941  536361 round_trippers.go:580]     Audit-Id: 32448b0c-3b64-4bfd-8e19-5784c06f310f
	I0116 02:57:30.904946  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:30.904951  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:30.904956  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:30.904964  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:30.904973  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:30 GMT
	I0116 02:57:30.905113  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"315","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0116 02:57:31.402693  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:31.402716  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:31.402724  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:31.402731  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:31.404821  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:31.404845  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:31.404855  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:31.404863  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:31.404871  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:31 GMT
	I0116 02:57:31.404880  536361 round_trippers.go:580]     Audit-Id: 0ade99a9-0d04-42dc-a690-1baab6fef85c
	I0116 02:57:31.404892  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:31.404904  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:31.405037  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"315","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0116 02:57:31.405356  536361 node_ready.go:58] node "multinode-061156" has status "Ready":"False"
	I0116 02:57:31.903009  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:31.903033  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:31.903041  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:31.903048  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:31.905584  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:31.905612  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:31.905632  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:31 GMT
	I0116 02:57:31.905645  536361 round_trippers.go:580]     Audit-Id: 0ca0318b-d9fc-4a41-94af-f39c2929d85e
	I0116 02:57:31.905657  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:31.905669  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:31.905681  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:31.905694  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:31.905840  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"315","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0116 02:57:32.403303  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:32.403335  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:32.403346  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:32.403355  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:32.405546  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:32.405575  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:32.405585  536361 round_trippers.go:580]     Audit-Id: f2d49897-f7da-4f8f-b161-bbc55b33cc0b
	I0116 02:57:32.405596  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:32.405606  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:32.405614  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:32.405627  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:32.405638  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:32 GMT
	I0116 02:57:32.405743  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"315","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0116 02:57:32.903010  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:32.903037  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:32.903049  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:32.903057  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:32.905501  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:32.905531  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:32.905541  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:32 GMT
	I0116 02:57:32.905549  536361 round_trippers.go:580]     Audit-Id: 48a3fb34-9a15-4679-b2a7-c3b6131daac3
	I0116 02:57:32.905557  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:32.905565  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:32.905581  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:32.905589  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:32.905796  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"315","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0116 02:57:33.403328  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:33.403355  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:33.403363  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:33.403369  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:33.405668  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:33.405691  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:33.405700  536361 round_trippers.go:580]     Audit-Id: 3c200f3c-d428-4b2f-8b5d-20366a481688
	I0116 02:57:33.405707  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:33.405715  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:33.405723  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:33.405731  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:33.405741  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:33 GMT
	I0116 02:57:33.405893  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"315","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0116 02:57:33.406221  536361 node_ready.go:58] node "multinode-061156" has status "Ready":"False"
	I0116 02:57:33.903505  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:33.903526  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:33.903535  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:33.903541  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:33.905691  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:33.905710  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:33.905717  536361 round_trippers.go:580]     Audit-Id: 8985d27d-8897-46ed-9b35-92a75f48fc32
	I0116 02:57:33.905725  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:33.905733  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:33.905741  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:33.905748  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:33.905757  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:33 GMT
	I0116 02:57:33.905897  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"315","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0116 02:57:34.402503  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:34.402528  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:34.402542  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:34.402548  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:34.404909  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:34.404935  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:34.404945  536361 round_trippers.go:580]     Audit-Id: 12e000d1-5aef-494c-aebd-775aff0624d2
	I0116 02:57:34.404973  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:34.404982  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:34.404995  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:34.405004  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:34.405014  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:34 GMT
	I0116 02:57:34.405143  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"315","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0116 02:57:34.902704  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:34.902735  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:34.902743  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:34.902749  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:34.904980  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:34.905001  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:34.905011  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:34 GMT
	I0116 02:57:34.905025  536361 round_trippers.go:580]     Audit-Id: cda0cecd-d150-451d-ad8c-535be87515a1
	I0116 02:57:34.905032  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:34.905040  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:34.905048  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:34.905063  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:34.905181  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"315","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0116 02:57:35.402755  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:35.402780  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:35.402789  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:35.402795  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:35.405109  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:35.405130  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:35.405137  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:35.405143  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:35 GMT
	I0116 02:57:35.405148  536361 round_trippers.go:580]     Audit-Id: 5df427e8-6a9a-45b3-b372-0d45022ad27a
	I0116 02:57:35.405153  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:35.405158  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:35.405163  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:35.405337  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"315","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0116 02:57:35.902609  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:35.902648  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:35.902657  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:35.902663  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:35.904943  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:35.904963  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:35.904970  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:35.904978  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:35 GMT
	I0116 02:57:35.904986  536361 round_trippers.go:580]     Audit-Id: 9c3b5c02-4e5f-4ae7-9546-13ac3e830f72
	I0116 02:57:35.904995  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:35.905004  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:35.905013  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:35.905167  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"315","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0116 02:57:35.905547  536361 node_ready.go:58] node "multinode-061156" has status "Ready":"False"
	I0116 02:57:36.402575  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:36.402597  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:36.402608  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:36.402614  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:36.404913  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:36.404937  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:36.404947  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:36 GMT
	I0116 02:57:36.404956  536361 round_trippers.go:580]     Audit-Id: 5006c725-83f3-4a5e-8853-810d4af9e90c
	I0116 02:57:36.404964  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:36.404973  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:36.404985  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:36.404995  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:36.405130  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"315","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0116 02:57:36.903000  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:36.903026  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:36.903035  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:36.903041  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:36.905388  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:36.905411  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:36.905420  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:36 GMT
	I0116 02:57:36.905429  536361 round_trippers.go:580]     Audit-Id: fb462287-ad02-4999-a813-bfe69b90597c
	I0116 02:57:36.905436  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:36.905443  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:36.905451  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:36.905459  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:36.905603  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"315","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0116 02:57:37.403438  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:37.403459  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:37.403468  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:37.403474  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:37.405728  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:37.405750  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:37.405759  536361 round_trippers.go:580]     Audit-Id: 88322b33-4e17-43fb-80b4-5016e6ab1f58
	I0116 02:57:37.405767  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:37.405774  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:37.405781  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:37.405788  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:37.405796  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:37 GMT
	I0116 02:57:37.405979  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"315","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0116 02:57:37.902556  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:37.902583  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:37.902594  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:37.902602  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:37.904895  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:37.904921  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:37.904931  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:37.904939  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:37.904947  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:37.904955  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:37.904963  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:37 GMT
	I0116 02:57:37.904976  536361 round_trippers.go:580]     Audit-Id: 92275269-003c-4b31-b6cd-b4acae8dc52a
	I0116 02:57:37.905119  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"315","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0116 02:57:38.402701  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:38.402731  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:38.402744  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:38.402754  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:38.405155  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:38.405173  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:38.405180  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:38 GMT
	I0116 02:57:38.405238  536361 round_trippers.go:580]     Audit-Id: 7f15d5f4-396e-4664-b2b8-cfebb60d10eb
	I0116 02:57:38.405248  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:38.405253  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:38.405259  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:38.405267  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:38.405401  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"315","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0116 02:57:38.405756  536361 node_ready.go:58] node "multinode-061156" has status "Ready":"False"
	I0116 02:57:38.902939  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:38.902963  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:38.902971  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:38.902977  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:38.905283  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:38.905303  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:38.905311  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:38.905317  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:38 GMT
	I0116 02:57:38.905322  536361 round_trippers.go:580]     Audit-Id: f4c406bf-636a-4935-91d7-9a22f96872a1
	I0116 02:57:38.905330  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:38.905338  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:38.905346  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:38.905496  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"405","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0116 02:57:38.905814  536361 node_ready.go:49] node "multinode-061156" has status "Ready":"True"
	I0116 02:57:38.905830  536361 node_ready.go:38] duration metric: took 30.503494834s waiting for node "multinode-061156" to be "Ready" ...
	I0116 02:57:38.905839  536361 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
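
The ~500 ms GET cadence above is minikube's node-readiness poll: fetch the Node object, inspect its Ready condition, and repeat until it reports "True" or the wait times out. Below is a minimal client-go sketch of that pattern. It is illustrative only, not minikube's actual node_ready.go; the node name, ~500 ms interval, and the order-of-30s wait are taken from the log, while the kubeconfig loading, timeout, and helper names are assumptions.

// Sketch of the node-readiness poll seen in the log above (assumed code,
// not minikube's implementation).
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady reports whether the node's Ready condition is "True", the
// check behind the `has status "Ready":"False"` lines in the log.
func nodeIsReady(node *corev1.Node) bool {
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// The log shows Ready flipping after roughly 30s; cap the poll with a
	// generous overall timeout the way the test harness does.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		node, err := client.CoreV1().Nodes().Get(ctx, "multinode-061156", metav1.GetOptions{})
		if err == nil && nodeIsReady(node) {
			fmt.Println("node multinode-061156 is Ready")
			return
		}
		select {
		case <-ctx.Done():
			panic("timed out waiting for node to become Ready")
		case <-ticker.C:
		}
	}
}
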
	I0116 02:57:38.905905  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0116 02:57:38.905911  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:38.905919  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:38.905927  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:38.908797  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:38.908820  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:38.908830  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:38.908839  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:38.908847  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:38 GMT
	I0116 02:57:38.908855  536361 round_trippers.go:580]     Audit-Id: 27c4aad9-665d-4a89-bc64-bdc47479f3fd
	I0116 02:57:38.908869  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:38.908878  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:38.909443  536361 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"411"},"items":[{"metadata":{"name":"coredns-5dd5756b68-4rrfv","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d6092a0e-384a-4e9a-92b1-f5a394a2eb25","resourceVersion":"410","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"7a1015df-c877-493c-bb76-694615980976","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a1015df-c877-493c-bb76-694615980976\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55534 chars]
	I0116 02:57:38.912474  536361 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-4rrfv" in "kube-system" namespace to be "Ready" ...
	I0116 02:57:38.912555  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-4rrfv
	I0116 02:57:38.912564  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:38.912571  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:38.912577  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:38.914440  536361 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 02:57:38.914455  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:38.914461  536361 round_trippers.go:580]     Audit-Id: 145ae355-7698-41e4-a5c0-f7f3f436dfb3
	I0116 02:57:38.914467  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:38.914471  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:38.914476  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:38.914481  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:38.914487  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:38 GMT
	I0116 02:57:38.914588  536361 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-4rrfv","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d6092a0e-384a-4e9a-92b1-f5a394a2eb25","resourceVersion":"410","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"7a1015df-c877-493c-bb76-694615980976","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a1015df-c877-493c-bb76-694615980976\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0116 02:57:38.914968  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:38.914982  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:38.914989  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:38.914995  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:38.916709  536361 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 02:57:38.916730  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:38.916740  536361 round_trippers.go:580]     Audit-Id: a5e89f19-e70e-475a-8c42-10cf378f98bd
	I0116 02:57:38.916748  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:38.916758  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:38.916767  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:38.916782  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:38.916790  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:38 GMT
	I0116 02:57:38.916883  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"405","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0116 02:57:39.413496  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-4rrfv
	I0116 02:57:39.413520  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:39.413533  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:39.413540  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:39.416451  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:39.416483  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:39.416493  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:39.416514  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:39.416522  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:39.416531  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:39 GMT
	I0116 02:57:39.416543  536361 round_trippers.go:580]     Audit-Id: a48ce96d-6e93-41df-a5cd-b825df8170f0
	I0116 02:57:39.416551  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:39.416681  536361 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-4rrfv","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d6092a0e-384a-4e9a-92b1-f5a394a2eb25","resourceVersion":"420","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"7a1015df-c877-493c-bb76-694615980976","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a1015df-c877-493c-bb76-694615980976\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I0116 02:57:39.417323  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:39.417347  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:39.417357  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:39.417366  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:39.419427  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:39.419447  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:39.419456  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:39.419465  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:39.419473  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:39.419482  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:39.419487  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:39 GMT
	I0116 02:57:39.419493  536361 round_trippers.go:580]     Audit-Id: e76513c6-3c68-42fb-8d45-b2c3558140f1
	I0116 02:57:39.419624  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"405","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0116 02:57:39.913296  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-4rrfv
	I0116 02:57:39.913324  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:39.913333  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:39.913339  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:39.915695  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:39.915714  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:39.915721  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:39 GMT
	I0116 02:57:39.915727  536361 round_trippers.go:580]     Audit-Id: 1d7dff3a-67aa-4678-93c5-a290537a0554
	I0116 02:57:39.915732  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:39.915737  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:39.915744  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:39.915749  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:39.915985  536361 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-4rrfv","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d6092a0e-384a-4e9a-92b1-f5a394a2eb25","resourceVersion":"423","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"7a1015df-c877-493c-bb76-694615980976","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a1015df-c877-493c-bb76-694615980976\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0116 02:57:39.916519  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:39.916536  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:39.916547  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:39.916554  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:39.918271  536361 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 02:57:39.918287  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:39.918293  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:39.918299  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:39.918304  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:39.918309  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:39 GMT
	I0116 02:57:39.918314  536361 round_trippers.go:580]     Audit-Id: fb7193f1-79bc-4d01-9bbf-baf33f8824bd
	I0116 02:57:39.918319  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:39.918495  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"405","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0116 02:57:39.918955  536361 pod_ready.go:92] pod "coredns-5dd5756b68-4rrfv" in "kube-system" namespace has status "Ready":"True"
	I0116 02:57:39.918990  536361 pod_ready.go:81] duration metric: took 1.006483587s waiting for pod "coredns-5dd5756b68-4rrfv" in "kube-system" namespace to be "Ready" ...
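
From here the log repeats the same wait, pod by pod, for each control-plane component. The per-pod check corresponds to reading the pod's Ready condition; a hedged sketch of that predicate follows (the package and helper names are assumed, and this is not minikube's actual pod_ready.go).

// Assumed per-pod readiness predicate: a pod counts as "Ready" when its
// PodReady condition reports "True", the signal behind the
// pod_ready.go:92 `has status "Ready":"True"` lines in the log.
package readiness

import corev1 "k8s.io/api/core/v1"

func podIsReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

The remainder of this log applies that same loop to etcd-multinode-061156, kube-apiserver-multinode-061156, and kube-controller-manager-multinode-061156 in turn.
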
	I0116 02:57:39.919006  536361 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-061156" in "kube-system" namespace to be "Ready" ...
	I0116 02:57:39.919086  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-061156
	I0116 02:57:39.919101  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:39.919109  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:39.919120  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:39.920898  536361 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 02:57:39.920918  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:39.920927  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:39.920935  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:39 GMT
	I0116 02:57:39.920942  536361 round_trippers.go:580]     Audit-Id: 96d78894-2fcc-472d-9538-4b4cf05d5f85
	I0116 02:57:39.920952  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:39.920961  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:39.920970  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:39.921135  536361 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-061156","namespace":"kube-system","uid":"e49c4a4d-ee57-4241-b505-e98608e6ddbf","resourceVersion":"278","creationTimestamp":"2024-01-16T02:56:54Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"81687b1150086af7b3cfc80a39f848b7","kubernetes.io/config.mirror":"81687b1150086af7b3cfc80a39f848b7","kubernetes.io/config.seen":"2024-01-16T02:56:54.246239492Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:56:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0116 02:57:39.921497  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:39.921510  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:39.921517  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:39.921523  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:39.923194  536361 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 02:57:39.923209  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:39.923215  536361 round_trippers.go:580]     Audit-Id: 973cf554-e65f-434c-842a-93a4d7766e0d
	I0116 02:57:39.923221  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:39.923227  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:39.923234  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:39.923239  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:39.923245  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:39 GMT
	I0116 02:57:39.923342  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"405","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0116 02:57:39.923607  536361 pod_ready.go:92] pod "etcd-multinode-061156" in "kube-system" namespace has status "Ready":"True"
	I0116 02:57:39.923620  536361 pod_ready.go:81] duration metric: took 4.606959ms waiting for pod "etcd-multinode-061156" in "kube-system" namespace to be "Ready" ...
	I0116 02:57:39.923630  536361 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-061156" in "kube-system" namespace to be "Ready" ...
	I0116 02:57:39.923677  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-061156
	I0116 02:57:39.923685  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:39.923692  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:39.923697  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:39.925463  536361 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 02:57:39.925484  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:39.925493  536361 round_trippers.go:580]     Audit-Id: 0cea42e2-2daa-4e05-ba9f-bd38beeae2dd
	I0116 02:57:39.925501  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:39.925509  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:39.925516  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:39.925527  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:39.925540  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:39 GMT
	I0116 02:57:39.925683  536361 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-061156","namespace":"kube-system","uid":"da3c627a-b324-482c-8416-cea88abe00ae","resourceVersion":"281","creationTimestamp":"2024-01-16T02:56:52Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"a713ba27efa0d680cee11acec275764d","kubernetes.io/config.mirror":"a713ba27efa0d680cee11acec275764d","kubernetes.io/config.seen":"2024-01-16T02:56:48.475217310Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:56:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0116 02:57:39.926101  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:39.926114  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:39.926121  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:39.926129  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:39.927673  536361 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 02:57:39.927687  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:39.927693  536361 round_trippers.go:580]     Audit-Id: 3e894fb7-a0e3-4de1-a7d6-37ec58648af2
	I0116 02:57:39.927699  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:39.927706  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:39.927716  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:39.927724  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:39.927736  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:39 GMT
	I0116 02:57:39.927881  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"405","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0116 02:57:39.928153  536361 pod_ready.go:92] pod "kube-apiserver-multinode-061156" in "kube-system" namespace has status "Ready":"True"
	I0116 02:57:39.928167  536361 pod_ready.go:81] duration metric: took 4.531299ms waiting for pod "kube-apiserver-multinode-061156" in "kube-system" namespace to be "Ready" ...
	I0116 02:57:39.928176  536361 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-061156" in "kube-system" namespace to be "Ready" ...
	I0116 02:57:39.928222  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-061156
	I0116 02:57:39.928230  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:39.928236  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:39.928243  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:39.930002  536361 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 02:57:39.930029  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:39.930038  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:39.930046  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:39.930055  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:39.930066  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:39 GMT
	I0116 02:57:39.930079  536361 round_trippers.go:580]     Audit-Id: ac280040-28e0-4113-8577-981c84dfe4ef
	I0116 02:57:39.930091  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:39.930222  536361 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-061156","namespace":"kube-system","uid":"5b792e14-d13e-43b8-a708-f27c31290eda","resourceVersion":"395","creationTimestamp":"2024-01-16T02:56:54Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b32ca35dfee6bdabe276b7d93aa2f570","kubernetes.io/config.mirror":"b32ca35dfee6bdabe276b7d93aa2f570","kubernetes.io/config.seen":"2024-01-16T02:56:54.246247549Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:56:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0116 02:57:39.930594  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:39.930606  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:39.930613  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:39.930619  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:39.932219  536361 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 02:57:39.932245  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:39.932272  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:39.932285  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:39.932297  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:39.932309  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:39.932319  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:39 GMT
	I0116 02:57:39.932329  536361 round_trippers.go:580]     Audit-Id: 1e888877-89b8-4029-ba9a-854c73ddea97
	I0116 02:57:39.932419  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"405","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0116 02:57:39.932671  536361 pod_ready.go:92] pod "kube-controller-manager-multinode-061156" in "kube-system" namespace has status "Ready":"True"
	I0116 02:57:39.932692  536361 pod_ready.go:81] duration metric: took 4.510078ms waiting for pod "kube-controller-manager-multinode-061156" in "kube-system" namespace to be "Ready" ...
	I0116 02:57:39.932700  536361 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xsg8g" in "kube-system" namespace to be "Ready" ...
	I0116 02:57:39.932741  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xsg8g
	I0116 02:57:39.932748  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:39.932754  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:39.932759  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:39.934389  536361 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 02:57:39.934404  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:39.934410  536361 round_trippers.go:580]     Audit-Id: b3436a23-5773-4649-8128-5f9d37a482af
	I0116 02:57:39.934416  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:39.934422  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:39.934429  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:39.934438  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:39.934447  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:39 GMT
	I0116 02:57:39.934618  536361 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xsg8g","generateName":"kube-proxy-","namespace":"kube-system","uid":"0e531a4d-783f-4c65-9580-2b8e43a88adb","resourceVersion":"390","creationTimestamp":"2024-01-16T02:57:06Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d3fcd685-a8b7-4613-b0c1-a2055037991b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d3fcd685-a8b7-4613-b0c1-a2055037991b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I0116 02:57:40.103265  536361 request.go:629] Waited for 168.311158ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:40.103325  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:40.103330  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:40.103358  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:40.103372  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:40.105776  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:40.105798  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:40.105808  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:40.105815  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:40.105823  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:40.105834  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:40.105844  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:40 GMT
	I0116 02:57:40.105855  536361 round_trippers.go:580]     Audit-Id: 8f7d452f-f091-475e-8d09-f48db049658f
	I0116 02:57:40.106000  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"405","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0116 02:57:40.106345  536361 pod_ready.go:92] pod "kube-proxy-xsg8g" in "kube-system" namespace has status "Ready":"True"
	I0116 02:57:40.106364  536361 pod_ready.go:81] duration metric: took 173.658737ms waiting for pod "kube-proxy-xsg8g" in "kube-system" namespace to be "Ready" ...
	I0116 02:57:40.106374  536361 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-061156" in "kube-system" namespace to be "Ready" ...
	I0116 02:57:40.303789  536361 request.go:629] Waited for 197.340446ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-061156
	I0116 02:57:40.303852  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-061156
	I0116 02:57:40.303857  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:40.303865  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:40.303871  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:40.306404  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:40.306423  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:40.306430  536361 round_trippers.go:580]     Audit-Id: ffd14953-32f8-4e6d-9178-5465a77ffe8b
	I0116 02:57:40.306436  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:40.306440  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:40.306445  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:40.306450  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:40.306455  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:40 GMT
	I0116 02:57:40.306630  536361 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-061156","namespace":"kube-system","uid":"9eee1777-3859-44ba-b059-2eb8b1aac78f","resourceVersion":"394","creationTimestamp":"2024-01-16T02:56:54Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"bce42c0a4b713a3d4100ced7dd0a146a","kubernetes.io/config.mirror":"bce42c0a4b713a3d4100ced7dd0a146a","kubernetes.io/config.seen":"2024-01-16T02:56:54.246248691Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:56:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0116 02:57:40.503362  536361 request.go:629] Waited for 196.352066ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:40.503440  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:57:40.503444  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:40.503452  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:40.503459  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:40.505743  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:40.505762  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:40.505769  536361 round_trippers.go:580]     Audit-Id: 12ed411d-64e2-4067-9af0-debee440a41b
	I0116 02:57:40.505775  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:40.505780  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:40.505785  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:40.505791  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:40.505796  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:40 GMT
	I0116 02:57:40.505962  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"405","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0116 02:57:40.506274  536361 pod_ready.go:92] pod "kube-scheduler-multinode-061156" in "kube-system" namespace has status "Ready":"True"
	I0116 02:57:40.506289  536361 pod_ready.go:81] duration metric: took 399.909666ms waiting for pod "kube-scheduler-multinode-061156" in "kube-system" namespace to be "Ready" ...
	I0116 02:57:40.506300  536361 pod_ready.go:38] duration metric: took 1.600447472s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 02:57:40.506316  536361 api_server.go:52] waiting for apiserver process to appear ...
	I0116 02:57:40.506369  536361 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 02:57:40.516864  536361 command_runner.go:130] > 1405
	I0116 02:57:40.516900  536361 api_server.go:72] duration metric: took 32.712486593s to wait for apiserver process to appear ...
	I0116 02:57:40.516909  536361 api_server.go:88] waiting for apiserver healthz status ...
	I0116 02:57:40.516926  536361 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0116 02:57:40.521670  536361 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0116 02:57:40.521730  536361 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I0116 02:57:40.521738  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:40.521746  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:40.521754  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:40.522599  536361 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0116 02:57:40.522610  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:40.522615  536361 round_trippers.go:580]     Audit-Id: 995875ee-2688-426f-96d6-a9a0ef4cfa49
	I0116 02:57:40.522621  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:40.522626  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:40.522631  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:40.522635  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:40.522643  536361 round_trippers.go:580]     Content-Length: 264
	I0116 02:57:40.522647  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:40 GMT
	I0116 02:57:40.522661  536361 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0116 02:57:40.522768  536361 api_server.go:141] control plane version: v1.28.4
	I0116 02:57:40.522784  536361 api_server.go:131] duration metric: took 5.870881ms to wait for apiserver health ...
	I0116 02:57:40.522791  536361 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 02:57:40.703106  536361 request.go:629] Waited for 180.250978ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0116 02:57:40.703198  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0116 02:57:40.703213  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:40.703225  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:40.703235  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:40.706500  536361 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:57:40.706527  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:40.706538  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:40.706547  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:40 GMT
	I0116 02:57:40.706557  536361 round_trippers.go:580]     Audit-Id: 2f56217f-907b-4139-9d54-18f371a3ea57
	I0116 02:57:40.706565  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:40.706574  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:40.706586  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:40.707058  536361 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"428"},"items":[{"metadata":{"name":"coredns-5dd5756b68-4rrfv","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d6092a0e-384a-4e9a-92b1-f5a394a2eb25","resourceVersion":"423","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"7a1015df-c877-493c-bb76-694615980976","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a1015df-c877-493c-bb76-694615980976\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I0116 02:57:40.708760  536361 system_pods.go:59] 8 kube-system pods found
	I0116 02:57:40.708783  536361 system_pods.go:61] "coredns-5dd5756b68-4rrfv" [d6092a0e-384a-4e9a-92b1-f5a394a2eb25] Running
	I0116 02:57:40.708788  536361 system_pods.go:61] "etcd-multinode-061156" [e49c4a4d-ee57-4241-b505-e98608e6ddbf] Running
	I0116 02:57:40.708792  536361 system_pods.go:61] "kindnet-86pdd" [73b1a04d-5339-4226-9d2b-5b574436acee] Running
	I0116 02:57:40.708796  536361 system_pods.go:61] "kube-apiserver-multinode-061156" [da3c627a-b324-482c-8416-cea88abe00ae] Running
	I0116 02:57:40.708803  536361 system_pods.go:61] "kube-controller-manager-multinode-061156" [5b792e14-d13e-43b8-a708-f27c31290eda] Running
	I0116 02:57:40.708808  536361 system_pods.go:61] "kube-proxy-xsg8g" [0e531a4d-783f-4c65-9580-2b8e43a88adb] Running
	I0116 02:57:40.708812  536361 system_pods.go:61] "kube-scheduler-multinode-061156" [9eee1777-3859-44ba-b059-2eb8b1aac78f] Running
	I0116 02:57:40.708816  536361 system_pods.go:61] "storage-provisioner" [5ada5003-e754-4457-91d0-cee0ba6b3640] Running
	I0116 02:57:40.708824  536361 system_pods.go:74] duration metric: took 186.026979ms to wait for pod list to return data ...
	I0116 02:57:40.708834  536361 default_sa.go:34] waiting for default service account to be created ...
	I0116 02:57:40.903284  536361 request.go:629] Waited for 194.356457ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0116 02:57:40.903359  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0116 02:57:40.903364  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:40.903372  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:40.903381  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:40.905760  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:40.905779  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:40.905791  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:40.905796  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:40.905808  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:40.905820  536361 round_trippers.go:580]     Content-Length: 261
	I0116 02:57:40.905831  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:40 GMT
	I0116 02:57:40.905842  536361 round_trippers.go:580]     Audit-Id: 6c5ec2ff-380b-40cc-a496-e7b03e6baa94
	I0116 02:57:40.905849  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:40.905873  536361 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"428"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"5af0280c-b8ab-4775-a857-a9461a5d6f0d","resourceVersion":"331","creationTimestamp":"2024-01-16T02:57:06Z"}}]}
	I0116 02:57:40.906079  536361 default_sa.go:45] found service account: "default"
	I0116 02:57:40.906099  536361 default_sa.go:55] duration metric: took 197.256371ms for default service account to be created ...
	I0116 02:57:40.906111  536361 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 02:57:41.103229  536361 request.go:629] Waited for 197.026894ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0116 02:57:41.103333  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0116 02:57:41.103343  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:41.103351  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:41.103358  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:41.106561  536361 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:57:41.106590  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:41.106598  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:41.106605  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:41.106610  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:41 GMT
	I0116 02:57:41.106615  536361 round_trippers.go:580]     Audit-Id: e0a472b8-4c8c-4d6a-95d1-6dc31581b01a
	I0116 02:57:41.106620  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:41.106626  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:41.107117  536361 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"428"},"items":[{"metadata":{"name":"coredns-5dd5756b68-4rrfv","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d6092a0e-384a-4e9a-92b1-f5a394a2eb25","resourceVersion":"423","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"7a1015df-c877-493c-bb76-694615980976","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a1015df-c877-493c-bb76-694615980976\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I0116 02:57:41.108824  536361 system_pods.go:86] 8 kube-system pods found
	I0116 02:57:41.108846  536361 system_pods.go:89] "coredns-5dd5756b68-4rrfv" [d6092a0e-384a-4e9a-92b1-f5a394a2eb25] Running
	I0116 02:57:41.108851  536361 system_pods.go:89] "etcd-multinode-061156" [e49c4a4d-ee57-4241-b505-e98608e6ddbf] Running
	I0116 02:57:41.108855  536361 system_pods.go:89] "kindnet-86pdd" [73b1a04d-5339-4226-9d2b-5b574436acee] Running
	I0116 02:57:41.108859  536361 system_pods.go:89] "kube-apiserver-multinode-061156" [da3c627a-b324-482c-8416-cea88abe00ae] Running
	I0116 02:57:41.108868  536361 system_pods.go:89] "kube-controller-manager-multinode-061156" [5b792e14-d13e-43b8-a708-f27c31290eda] Running
	I0116 02:57:41.108876  536361 system_pods.go:89] "kube-proxy-xsg8g" [0e531a4d-783f-4c65-9580-2b8e43a88adb] Running
	I0116 02:57:41.108881  536361 system_pods.go:89] "kube-scheduler-multinode-061156" [9eee1777-3859-44ba-b059-2eb8b1aac78f] Running
	I0116 02:57:41.108884  536361 system_pods.go:89] "storage-provisioner" [5ada5003-e754-4457-91d0-cee0ba6b3640] Running
	I0116 02:57:41.108892  536361 system_pods.go:126] duration metric: took 202.771525ms to wait for k8s-apps to be running ...
	I0116 02:57:41.108902  536361 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 02:57:41.108956  536361 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 02:57:41.119565  536361 system_svc.go:56] duration metric: took 10.647596ms WaitForService to wait for kubelet.
	I0116 02:57:41.119591  536361 kubeadm.go:581] duration metric: took 33.315176177s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 02:57:41.119611  536361 node_conditions.go:102] verifying NodePressure condition ...
	I0116 02:57:41.302995  536361 request.go:629] Waited for 183.29252ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0116 02:57:41.303086  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0116 02:57:41.303094  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:41.303109  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:41.303126  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:41.305549  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:41.305584  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:41.305596  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:41.305605  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:41.305614  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:41.305622  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:41 GMT
	I0116 02:57:41.305630  536361 round_trippers.go:580]     Audit-Id: 8881d17d-d7a4-48a4-8d1a-d146a44025a3
	I0116 02:57:41.305636  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:41.305806  536361 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"429"},"items":[{"metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"405","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6000 chars]
	I0116 02:57:41.306283  536361 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0116 02:57:41.306311  536361 node_conditions.go:123] node cpu capacity is 8
	I0116 02:57:41.306325  536361 node_conditions.go:105] duration metric: took 186.708717ms to run NodePressure ...
	I0116 02:57:41.306342  536361 start.go:228] waiting for startup goroutines ...
	I0116 02:57:41.306355  536361 start.go:233] waiting for cluster config update ...
	I0116 02:57:41.306372  536361 start.go:242] writing updated cluster config ...
	I0116 02:57:41.308785  536361 out.go:177] 
	I0116 02:57:41.310165  536361 config.go:182] Loaded profile config "multinode-061156": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 02:57:41.310226  536361 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/multinode-061156/config.json ...
	I0116 02:57:41.312036  536361 out.go:177] * Starting worker node multinode-061156-m02 in cluster multinode-061156
	I0116 02:57:41.313916  536361 cache.go:121] Beginning downloading kic base image for docker with crio
	I0116 02:57:41.315514  536361 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0116 02:57:41.316926  536361 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 02:57:41.316948  536361 cache.go:56] Caching tarball of preloaded images
	I0116 02:57:41.316955  536361 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0116 02:57:41.317034  536361 preload.go:174] Found /home/jenkins/minikube-integration/17965-443749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0116 02:57:41.317045  536361 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0116 02:57:41.317119  536361 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/multinode-061156/config.json ...
	I0116 02:57:41.332864  536361 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0116 02:57:41.332889  536361 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
	I0116 02:57:41.332915  536361 cache.go:194] Successfully downloaded all kic artifacts
	I0116 02:57:41.332954  536361 start.go:365] acquiring machines lock for multinode-061156-m02: {Name:mkfd843bca6e720aed4ea4923b6ca5a9235272eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 02:57:41.333065  536361 start.go:369] acquired machines lock for "multinode-061156-m02" in 88.12µs
	I0116 02:57:41.333091  536361 start.go:93] Provisioning new machine with config: &{Name:multinode-061156 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-061156 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0116 02:57:41.333164  536361 start.go:125] createHost starting for "m02" (driver="docker")
	I0116 02:57:41.336059  536361 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0116 02:57:41.336174  536361 start.go:159] libmachine.API.Create for "multinode-061156" (driver="docker")
	I0116 02:57:41.336201  536361 client.go:168] LocalClient.Create starting
	I0116 02:57:41.336294  536361 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17965-443749/.minikube/certs/ca.pem
	I0116 02:57:41.336328  536361 main.go:141] libmachine: Decoding PEM data...
	I0116 02:57:41.336346  536361 main.go:141] libmachine: Parsing certificate...
	I0116 02:57:41.336403  536361 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17965-443749/.minikube/certs/cert.pem
	I0116 02:57:41.336422  536361 main.go:141] libmachine: Decoding PEM data...
	I0116 02:57:41.336434  536361 main.go:141] libmachine: Parsing certificate...
	I0116 02:57:41.336620  536361 cli_runner.go:164] Run: docker network inspect multinode-061156 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0116 02:57:41.352743  536361 network_create.go:77] Found existing network {name:multinode-061156 subnet:0xc002e4c060 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I0116 02:57:41.352792  536361 kic.go:121] calculated static IP "192.168.58.3" for the "multinode-061156-m02" container
	I0116 02:57:41.352865  536361 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0116 02:57:41.368066  536361 cli_runner.go:164] Run: docker volume create multinode-061156-m02 --label name.minikube.sigs.k8s.io=multinode-061156-m02 --label created_by.minikube.sigs.k8s.io=true
	I0116 02:57:41.383968  536361 oci.go:103] Successfully created a docker volume multinode-061156-m02
	I0116 02:57:41.384046  536361 cli_runner.go:164] Run: docker run --rm --name multinode-061156-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-061156-m02 --entrypoint /usr/bin/test -v multinode-061156-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
	I0116 02:57:41.862109  536361 oci.go:107] Successfully prepared a docker volume multinode-061156-m02
	I0116 02:57:41.862159  536361 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 02:57:41.862183  536361 kic.go:194] Starting extracting preloaded images to volume ...
	I0116 02:57:41.862239  536361 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17965-443749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-061156-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0116 02:57:46.914019  536361 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17965-443749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-061156-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir: (5.051726606s)
	I0116 02:57:46.914059  536361 kic.go:203] duration metric: took 5.051867 seconds to extract preloaded images to volume
	W0116 02:57:46.914187  536361 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0116 02:57:46.914267  536361 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0116 02:57:46.965316  536361 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-061156-m02 --name multinode-061156-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-061156-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-061156-m02 --network multinode-061156 --ip 192.168.58.3 --volume multinode-061156-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0
	I0116 02:57:47.234424  536361 cli_runner.go:164] Run: docker container inspect multinode-061156-m02 --format={{.State.Running}}
	I0116 02:57:47.251680  536361 cli_runner.go:164] Run: docker container inspect multinode-061156-m02 --format={{.State.Status}}
	I0116 02:57:47.269114  536361 cli_runner.go:164] Run: docker exec multinode-061156-m02 stat /var/lib/dpkg/alternatives/iptables
	I0116 02:57:47.307616  536361 oci.go:144] the created container "multinode-061156-m02" has a running status.
	I0116 02:57:47.307661  536361 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17965-443749/.minikube/machines/multinode-061156-m02/id_rsa...
	I0116 02:57:47.431445  536361 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-443749/.minikube/machines/multinode-061156-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0116 02:57:47.431485  536361 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17965-443749/.minikube/machines/multinode-061156-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0116 02:57:47.451359  536361 cli_runner.go:164] Run: docker container inspect multinode-061156-m02 --format={{.State.Status}}
	I0116 02:57:47.473144  536361 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0116 02:57:47.473173  536361 kic_runner.go:114] Args: [docker exec --privileged multinode-061156-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0116 02:57:47.525419  536361 cli_runner.go:164] Run: docker container inspect multinode-061156-m02 --format={{.State.Status}}
	I0116 02:57:47.543003  536361 machine.go:88] provisioning docker machine ...
	I0116 02:57:47.543068  536361 ubuntu.go:169] provisioning hostname "multinode-061156-m02"
	I0116 02:57:47.543141  536361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-061156-m02
	I0116 02:57:47.563938  536361 main.go:141] libmachine: Using SSH client type: native
	I0116 02:57:47.564484  536361 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 33287 <nil> <nil>}
	I0116 02:57:47.564513  536361 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-061156-m02 && echo "multinode-061156-m02" | sudo tee /etc/hostname
	I0116 02:57:47.565253  536361 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55430->127.0.0.1:33287: read: connection reset by peer
	I0116 02:57:50.707043  536361 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-061156-m02
	
	I0116 02:57:50.707131  536361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-061156-m02
	I0116 02:57:50.723857  536361 main.go:141] libmachine: Using SSH client type: native
	I0116 02:57:50.724187  536361 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 33287 <nil> <nil>}
	I0116 02:57:50.724205  536361 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-061156-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-061156-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-061156-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 02:57:50.856372  536361 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 02:57:50.856412  536361 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17965-443749/.minikube CaCertPath:/home/jenkins/minikube-integration/17965-443749/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17965-443749/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17965-443749/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17965-443749/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17965-443749/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17965-443749/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17965-443749/.minikube}
	I0116 02:57:50.856441  536361 ubuntu.go:177] setting up certificates
	I0116 02:57:50.856460  536361 provision.go:83] configureAuth start
	I0116 02:57:50.856536  536361 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-061156-m02
	I0116 02:57:50.872555  536361 provision.go:138] copyHostCerts
	I0116 02:57:50.872605  536361 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-443749/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17965-443749/.minikube/ca.pem
	I0116 02:57:50.872639  536361 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-443749/.minikube/ca.pem, removing ...
	I0116 02:57:50.872649  536361 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-443749/.minikube/ca.pem
	I0116 02:57:50.872726  536361 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-443749/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17965-443749/.minikube/ca.pem (1078 bytes)
	I0116 02:57:50.872811  536361 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-443749/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17965-443749/.minikube/cert.pem
	I0116 02:57:50.872835  536361 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-443749/.minikube/cert.pem, removing ...
	I0116 02:57:50.872842  536361 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-443749/.minikube/cert.pem
	I0116 02:57:50.872874  536361 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-443749/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17965-443749/.minikube/cert.pem (1123 bytes)
	I0116 02:57:50.872946  536361 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-443749/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17965-443749/.minikube/key.pem
	I0116 02:57:50.872974  536361 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-443749/.minikube/key.pem, removing ...
	I0116 02:57:50.872983  536361 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-443749/.minikube/key.pem
	I0116 02:57:50.873016  536361 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-443749/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17965-443749/.minikube/key.pem (1675 bytes)
	I0116 02:57:50.873078  536361 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17965-443749/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17965-443749/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17965-443749/.minikube/certs/ca-key.pem org=jenkins.multinode-061156-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-061156-m02]
	I0116 02:57:50.983116  536361 provision.go:172] copyRemoteCerts
	I0116 02:57:50.983175  536361 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 02:57:50.983212  536361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-061156-m02
	I0116 02:57:50.999632  536361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33287 SSHKeyPath:/home/jenkins/minikube-integration/17965-443749/.minikube/machines/multinode-061156-m02/id_rsa Username:docker}
	I0116 02:57:51.092654  536361 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-443749/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0116 02:57:51.092723  536361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-443749/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 02:57:51.114097  536361 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-443749/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0116 02:57:51.114166  536361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-443749/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0116 02:57:51.135821  536361 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-443749/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0116 02:57:51.135883  536361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-443749/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0116 02:57:51.157420  536361 provision.go:86] duration metric: configureAuth took 300.939912ms
	I0116 02:57:51.157453  536361 ubuntu.go:193] setting minikube options for container-runtime
	I0116 02:57:51.157664  536361 config.go:182] Loaded profile config "multinode-061156": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 02:57:51.157789  536361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-061156-m02
	I0116 02:57:51.173438  536361 main.go:141] libmachine: Using SSH client type: native
	I0116 02:57:51.173768  536361 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 33287 <nil> <nil>}
	I0116 02:57:51.173784  536361 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 02:57:51.395653  536361 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 02:57:51.395682  536361 machine.go:91] provisioned docker machine in 3.852651572s
	I0116 02:57:51.395694  536361 client.go:171] LocalClient.Create took 10.059487125s
	I0116 02:57:51.395717  536361 start.go:167] duration metric: libmachine.API.Create for "multinode-061156" took 10.059542877s
	I0116 02:57:51.395728  536361 start.go:300] post-start starting for "multinode-061156-m02" (driver="docker")
	I0116 02:57:51.395751  536361 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 02:57:51.395818  536361 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 02:57:51.395872  536361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-061156-m02
	I0116 02:57:51.411637  536361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33287 SSHKeyPath:/home/jenkins/minikube-integration/17965-443749/.minikube/machines/multinode-061156-m02/id_rsa Username:docker}
	I0116 02:57:51.513036  536361 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 02:57:51.515914  536361 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I0116 02:57:51.515940  536361 command_runner.go:130] > NAME="Ubuntu"
	I0116 02:57:51.515949  536361 command_runner.go:130] > VERSION_ID="22.04"
	I0116 02:57:51.515956  536361 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I0116 02:57:51.515964  536361 command_runner.go:130] > VERSION_CODENAME=jammy
	I0116 02:57:51.515975  536361 command_runner.go:130] > ID=ubuntu
	I0116 02:57:51.515983  536361 command_runner.go:130] > ID_LIKE=debian
	I0116 02:57:51.515994  536361 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0116 02:57:51.516002  536361 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0116 02:57:51.516017  536361 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0116 02:57:51.516043  536361 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0116 02:57:51.516054  536361 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0116 02:57:51.516127  536361 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0116 02:57:51.516164  536361 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0116 02:57:51.516182  536361 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0116 02:57:51.516195  536361 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0116 02:57:51.516213  536361 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-443749/.minikube/addons for local assets ...
	I0116 02:57:51.516289  536361 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-443749/.minikube/files for local assets ...
	I0116 02:57:51.516386  536361 filesync.go:149] local asset: /home/jenkins/minikube-integration/17965-443749/.minikube/files/etc/ssl/certs/4505732.pem -> 4505732.pem in /etc/ssl/certs
	I0116 02:57:51.516397  536361 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-443749/.minikube/files/etc/ssl/certs/4505732.pem -> /etc/ssl/certs/4505732.pem
	I0116 02:57:51.516506  536361 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 02:57:51.524068  536361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-443749/.minikube/files/etc/ssl/certs/4505732.pem --> /etc/ssl/certs/4505732.pem (1708 bytes)
	I0116 02:57:51.546167  536361 start.go:303] post-start completed in 150.426216ms
	I0116 02:57:51.546471  536361 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-061156-m02
	I0116 02:57:51.563287  536361 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/multinode-061156/config.json ...
	I0116 02:57:51.563555  536361 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0116 02:57:51.563607  536361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-061156-m02
	I0116 02:57:51.580659  536361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33287 SSHKeyPath:/home/jenkins/minikube-integration/17965-443749/.minikube/machines/multinode-061156-m02/id_rsa Username:docker}
	I0116 02:57:51.673461  536361 command_runner.go:130] > 27%!
	(MISSING)I0116 02:57:51.673532  536361 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0116 02:57:51.677846  536361 command_runner.go:130] > 215G
	I0116 02:57:51.677881  536361 start.go:128] duration metric: createHost completed in 10.344702177s
	I0116 02:57:51.677893  536361 start.go:83] releasing machines lock for "multinode-061156-m02", held for 10.34481521s
	I0116 02:57:51.677955  536361 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-061156-m02
	I0116 02:57:51.695004  536361 out.go:177] * Found network options:
	I0116 02:57:51.696405  536361 out.go:177]   - NO_PROXY=192.168.58.2
	W0116 02:57:51.697581  536361 proxy.go:119] fail to check proxy env: Error ip not in block
	W0116 02:57:51.697651  536361 proxy.go:119] fail to check proxy env: Error ip not in block
	I0116 02:57:51.697762  536361 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 02:57:51.697789  536361 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 02:57:51.697812  536361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-061156-m02
	I0116 02:57:51.697861  536361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-061156-m02
	I0116 02:57:51.714880  536361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33287 SSHKeyPath:/home/jenkins/minikube-integration/17965-443749/.minikube/machines/multinode-061156-m02/id_rsa Username:docker}
	I0116 02:57:51.715995  536361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33287 SSHKeyPath:/home/jenkins/minikube-integration/17965-443749/.minikube/machines/multinode-061156-m02/id_rsa Username:docker}
	I0116 02:57:51.937165  536361 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0116 02:57:51.937179  536361 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0116 02:57:51.941106  536361 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0116 02:57:51.941125  536361 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0116 02:57:51.941132  536361 command_runner.go:130] > Device: b0h/176d	Inode: 1043901     Links: 1
	I0116 02:57:51.941138  536361 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0116 02:57:51.941144  536361 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0116 02:57:51.941148  536361 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0116 02:57:51.941156  536361 command_runner.go:130] > Change: 2024-01-16 02:37:07.766569517 +0000
	I0116 02:57:51.941164  536361 command_runner.go:130] >  Birth: 2024-01-16 02:37:07.766569517 +0000
	I0116 02:57:51.941411  536361 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 02:57:51.958769  536361 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0116 02:57:51.958845  536361 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 02:57:51.984847  536361 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0116 02:57:51.984921  536361 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
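	For reference, a minimal shell sketch of undoing this step by hand after a run; it assumes only the .mk_disabled suffix convention shown in the log above:
	
	for f in /etc/cni/net.d/*.mk_disabled; do
	  [ -e "$f" ] || continue            # skip if the glob matched nothing
	  sudo mv "$f" "${f%.mk_disabled}"   # strip the suffix to re-enable the config
	done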
	I0116 02:57:51.984952  536361 start.go:475] detecting cgroup driver to use...
	I0116 02:57:51.984987  536361 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0116 02:57:51.985044  536361 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 02:57:51.998118  536361 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 02:57:52.007980  536361 docker.go:217] disabling cri-docker service (if available) ...
	I0116 02:57:52.008039  536361 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 02:57:52.022951  536361 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 02:57:52.035657  536361 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 02:57:52.112685  536361 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 02:57:52.125895  536361 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0116 02:57:52.192518  536361 docker.go:233] disabling docker service ...
	I0116 02:57:52.192594  536361 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 02:57:52.209505  536361 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 02:57:52.219618  536361 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 02:57:52.230379  536361 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0116 02:57:52.296681  536361 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 02:57:52.382529  536361 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0116 02:57:52.382614  536361 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 02:57:52.392791  536361 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 02:57:52.406510  536361 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0116 02:57:52.407354  536361 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 02:57:52.407431  536361 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 02:57:52.416131  536361 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 02:57:52.416182  536361 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 02:57:52.424900  536361 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 02:57:52.433867  536361 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
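	The three sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly these uncommented keys (a sketch; the drop-in's other contents are not shown in this log, though the `crio config` dump below confirms all three values took effect):
	
	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"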
	I0116 02:57:52.442430  536361 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 02:57:52.450832  536361 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 02:57:52.457602  536361 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0116 02:57:52.458294  536361 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
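	A quick, read-only way to double-check the two kernel networking prerequisites touched above:
	
	sysctl net.bridge.bridge-nf-call-iptables   # expect: net.bridge.bridge-nf-call-iptables = 1
	cat /proc/sys/net/ipv4/ip_forward           # expect: 1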
	I0116 02:57:52.465525  536361 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 02:57:52.539658  536361 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 02:57:52.652307  536361 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 02:57:52.652382  536361 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 02:57:52.655684  536361 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0116 02:57:52.655712  536361 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0116 02:57:52.655722  536361 command_runner.go:130] > Device: b9h/185d	Inode: 186         Links: 1
	I0116 02:57:52.655730  536361 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0116 02:57:52.655735  536361 command_runner.go:130] > Access: 2024-01-16 02:57:52.639572217 +0000
	I0116 02:57:52.655745  536361 command_runner.go:130] > Modify: 2024-01-16 02:57:52.639572217 +0000
	I0116 02:57:52.655753  536361 command_runner.go:130] > Change: 2024-01-16 02:57:52.639572217 +0000
	I0116 02:57:52.655758  536361 command_runner.go:130] >  Birth: -
	I0116 02:57:52.655810  536361 start.go:543] Will wait 60s for crictl version
	I0116 02:57:52.655849  536361 ssh_runner.go:195] Run: which crictl
	I0116 02:57:52.658825  536361 command_runner.go:130] > /usr/bin/crictl
	I0116 02:57:52.658886  536361 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 02:57:52.690310  536361 command_runner.go:130] > Version:  0.1.0
	I0116 02:57:52.690331  536361 command_runner.go:130] > RuntimeName:  cri-o
	I0116 02:57:52.690335  536361 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0116 02:57:52.690340  536361 command_runner.go:130] > RuntimeApiVersion:  v1
	I0116 02:57:52.690350  536361 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
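	Once /var/run/crio/crio.sock is up, the same version probe can be reproduced by hand; the endpoint flag is strictly redundant here because the /etc/crictl.yaml written earlier already points crictl at the CRI-O socket:
	
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version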
	I0116 02:57:52.690398  536361 ssh_runner.go:195] Run: crio --version
	I0116 02:57:52.722574  536361 command_runner.go:130] > crio version 1.24.6
	I0116 02:57:52.722598  536361 command_runner.go:130] > Version:          1.24.6
	I0116 02:57:52.722609  536361 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0116 02:57:52.722615  536361 command_runner.go:130] > GitTreeState:     clean
	I0116 02:57:52.722623  536361 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0116 02:57:52.722630  536361 command_runner.go:130] > GoVersion:        go1.18.2
	I0116 02:57:52.722636  536361 command_runner.go:130] > Compiler:         gc
	I0116 02:57:52.722643  536361 command_runner.go:130] > Platform:         linux/amd64
	I0116 02:57:52.722661  536361 command_runner.go:130] > Linkmode:         dynamic
	I0116 02:57:52.722678  536361 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0116 02:57:52.722689  536361 command_runner.go:130] > SeccompEnabled:   true
	I0116 02:57:52.722729  536361 command_runner.go:130] > AppArmorEnabled:  false
	I0116 02:57:52.724288  536361 ssh_runner.go:195] Run: crio --version
	I0116 02:57:52.758468  536361 command_runner.go:130] > crio version 1.24.6
	I0116 02:57:52.758489  536361 command_runner.go:130] > Version:          1.24.6
	I0116 02:57:52.758496  536361 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0116 02:57:52.758500  536361 command_runner.go:130] > GitTreeState:     clean
	I0116 02:57:52.758507  536361 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0116 02:57:52.758511  536361 command_runner.go:130] > GoVersion:        go1.18.2
	I0116 02:57:52.758519  536361 command_runner.go:130] > Compiler:         gc
	I0116 02:57:52.758523  536361 command_runner.go:130] > Platform:         linux/amd64
	I0116 02:57:52.758531  536361 command_runner.go:130] > Linkmode:         dynamic
	I0116 02:57:52.758539  536361 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0116 02:57:52.758545  536361 command_runner.go:130] > SeccompEnabled:   true
	I0116 02:57:52.758550  536361 command_runner.go:130] > AppArmorEnabled:  false
	I0116 02:57:52.760510  536361 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I0116 02:57:52.761962  536361 out.go:177]   - env NO_PROXY=192.168.58.2
	I0116 02:57:52.763231  536361 cli_runner.go:164] Run: docker network inspect multinode-061156 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0116 02:57:52.778937  536361 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0116 02:57:52.782656  536361 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
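	The hosts rewrite above replaces any stale host.minikube.internal entry atomically via a temp file; afterwards the node should carry exactly one such entry:
	
	grep host.minikube.internal /etc/hosts   # expect: 192.168.58.1	host.minikube.internal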
	I0116 02:57:52.792669  536361 certs.go:56] Setting up /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/multinode-061156 for IP: 192.168.58.3
	I0116 02:57:52.792696  536361 certs.go:190] acquiring lock for shared ca certs: {Name:mk8883b8c07de4938a73ea389443b00589415803 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:57:52.792849  536361 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17965-443749/.minikube/ca.key
	I0116 02:57:52.792886  536361 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17965-443749/.minikube/proxy-client-ca.key
	I0116 02:57:52.792907  536361 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-443749/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0116 02:57:52.792922  536361 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-443749/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0116 02:57:52.792936  536361 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-443749/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0116 02:57:52.792948  536361 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-443749/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0116 02:57:52.792999  536361 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-443749/.minikube/certs/home/jenkins/minikube-integration/17965-443749/.minikube/certs/450573.pem (1338 bytes)
	W0116 02:57:52.793028  536361 certs.go:433] ignoring /home/jenkins/minikube-integration/17965-443749/.minikube/certs/home/jenkins/minikube-integration/17965-443749/.minikube/certs/450573_empty.pem, impossibly tiny 0 bytes
	I0116 02:57:52.793035  536361 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-443749/.minikube/certs/home/jenkins/minikube-integration/17965-443749/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 02:57:52.793062  536361 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-443749/.minikube/certs/home/jenkins/minikube-integration/17965-443749/.minikube/certs/ca.pem (1078 bytes)
	I0116 02:57:52.793087  536361 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-443749/.minikube/certs/home/jenkins/minikube-integration/17965-443749/.minikube/certs/cert.pem (1123 bytes)
	I0116 02:57:52.793160  536361 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-443749/.minikube/certs/home/jenkins/minikube-integration/17965-443749/.minikube/certs/key.pem (1675 bytes)
	I0116 02:57:52.793212  536361 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-443749/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17965-443749/.minikube/files/etc/ssl/certs/4505732.pem (1708 bytes)
	I0116 02:57:52.793237  536361 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-443749/.minikube/files/etc/ssl/certs/4505732.pem -> /usr/share/ca-certificates/4505732.pem
	I0116 02:57:52.793250  536361 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-443749/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:57:52.793261  536361 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-443749/.minikube/certs/450573.pem -> /usr/share/ca-certificates/450573.pem
	I0116 02:57:52.793603  536361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-443749/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 02:57:52.814432  536361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-443749/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0116 02:57:52.835476  536361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-443749/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 02:57:52.856086  536361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-443749/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 02:57:52.876466  536361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-443749/.minikube/files/etc/ssl/certs/4505732.pem --> /usr/share/ca-certificates/4505732.pem (1708 bytes)
	I0116 02:57:52.897407  536361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-443749/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 02:57:52.918873  536361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-443749/.minikube/certs/450573.pem --> /usr/share/ca-certificates/450573.pem (1338 bytes)
	I0116 02:57:52.940757  536361 ssh_runner.go:195] Run: openssl version
	I0116 02:57:52.945620  536361 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0116 02:57:52.945696  536361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4505732.pem && ln -fs /usr/share/ca-certificates/4505732.pem /etc/ssl/certs/4505732.pem"
	I0116 02:57:52.954120  536361 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4505732.pem
	I0116 02:57:52.957249  536361 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 16 02:43 /usr/share/ca-certificates/4505732.pem
	I0116 02:57:52.957281  536361 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 02:43 /usr/share/ca-certificates/4505732.pem
	I0116 02:57:52.957329  536361 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4505732.pem
	I0116 02:57:52.963119  536361 command_runner.go:130] > 3ec20f2e
	I0116 02:57:52.963400  536361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4505732.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 02:57:52.971596  536361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 02:57:52.979847  536361 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:57:52.982818  536361 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 16 02:37 /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:57:52.982846  536361 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 02:37 /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:57:52.982889  536361 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:57:52.988742  536361 command_runner.go:130] > b5213941
	I0116 02:57:52.989061  536361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 02:57:52.996974  536361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/450573.pem && ln -fs /usr/share/ca-certificates/450573.pem /etc/ssl/certs/450573.pem"
	I0116 02:57:53.005169  536361 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/450573.pem
	I0116 02:57:53.008092  536361 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 16 02:43 /usr/share/ca-certificates/450573.pem
	I0116 02:57:53.008149  536361 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 02:43 /usr/share/ca-certificates/450573.pem
	I0116 02:57:53.008191  536361 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/450573.pem
	I0116 02:57:53.014030  536361 command_runner.go:130] > 51391683
	I0116 02:57:53.014284  536361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/450573.pem /etc/ssl/certs/51391683.0"
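	Each <hash>.0 link created above follows OpenSSL's subject-hash lookup convention; a minimal sketch reproducing the minikubeCA link by hand, with paths as in the log:
	
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # h is b5213941 per the log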
	I0116 02:57:53.022456  536361 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 02:57:53.025260  536361 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0116 02:57:53.025296  536361 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0116 02:57:53.025360  536361 ssh_runner.go:195] Run: crio config
	I0116 02:57:53.060408  536361 command_runner.go:130] ! time="2024-01-16 02:57:53.060083325Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0116 02:57:53.060433  536361 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0116 02:57:53.065534  536361 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0116 02:57:53.065556  536361 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0116 02:57:53.065562  536361 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0116 02:57:53.065566  536361 command_runner.go:130] > #
	I0116 02:57:53.065573  536361 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0116 02:57:53.065578  536361 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0116 02:57:53.065584  536361 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0116 02:57:53.065591  536361 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0116 02:57:53.065595  536361 command_runner.go:130] > # reload'.
	I0116 02:57:53.065605  536361 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0116 02:57:53.065614  536361 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0116 02:57:53.065621  536361 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0116 02:57:53.065629  536361 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0116 02:57:53.065632  536361 command_runner.go:130] > [crio]
	I0116 02:57:53.065640  536361 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0116 02:57:53.065645  536361 command_runner.go:130] > # container images, in this directory.
	I0116 02:57:53.065656  536361 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0116 02:57:53.065663  536361 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0116 02:57:53.065668  536361 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0116 02:57:53.065677  536361 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0116 02:57:53.065685  536361 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0116 02:57:53.065692  536361 command_runner.go:130] > # storage_driver = "vfs"
	I0116 02:57:53.065698  536361 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0116 02:57:53.065706  536361 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0116 02:57:53.065712  536361 command_runner.go:130] > # storage_option = [
	I0116 02:57:53.065716  536361 command_runner.go:130] > # ]
	I0116 02:57:53.065725  536361 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0116 02:57:53.065735  536361 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0116 02:57:53.065742  536361 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0116 02:57:53.065752  536361 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0116 02:57:53.065758  536361 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0116 02:57:53.065765  536361 command_runner.go:130] > # always happen on a node reboot
	I0116 02:57:53.065770  536361 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0116 02:57:53.065778  536361 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0116 02:57:53.065786  536361 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0116 02:57:53.065796  536361 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0116 02:57:53.065803  536361 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0116 02:57:53.065811  536361 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0116 02:57:53.065821  536361 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0116 02:57:53.065827  536361 command_runner.go:130] > # internal_wipe = true
	I0116 02:57:53.065832  536361 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0116 02:57:53.065840  536361 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0116 02:57:53.065848  536361 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0116 02:57:53.065856  536361 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0116 02:57:53.065862  536361 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0116 02:57:53.065871  536361 command_runner.go:130] > [crio.api]
	I0116 02:57:53.065878  536361 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0116 02:57:53.065883  536361 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0116 02:57:53.065890  536361 command_runner.go:130] > # IP address on which the stream server will listen.
	I0116 02:57:53.065894  536361 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0116 02:57:53.065903  536361 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0116 02:57:53.065910  536361 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0116 02:57:53.065917  536361 command_runner.go:130] > # stream_port = "0"
	I0116 02:57:53.065923  536361 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0116 02:57:53.065929  536361 command_runner.go:130] > # stream_enable_tls = false
	I0116 02:57:53.065935  536361 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0116 02:57:53.065941  536361 command_runner.go:130] > # stream_idle_timeout = ""
	I0116 02:57:53.065947  536361 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0116 02:57:53.065956  536361 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0116 02:57:53.065961  536361 command_runner.go:130] > # minutes.
	I0116 02:57:53.065966  536361 command_runner.go:130] > # stream_tls_cert = ""
	I0116 02:57:53.065974  536361 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0116 02:57:53.065987  536361 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0116 02:57:53.065996  536361 command_runner.go:130] > # stream_tls_key = ""
	I0116 02:57:53.066004  536361 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0116 02:57:53.066013  536361 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0116 02:57:53.066020  536361 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0116 02:57:53.066027  536361 command_runner.go:130] > # stream_tls_ca = ""
	I0116 02:57:53.066034  536361 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0116 02:57:53.066041  536361 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0116 02:57:53.066048  536361 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0116 02:57:53.066054  536361 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0116 02:57:53.066075  536361 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0116 02:57:53.066082  536361 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0116 02:57:53.066087  536361 command_runner.go:130] > [crio.runtime]
	I0116 02:57:53.066093  536361 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0116 02:57:53.066101  536361 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0116 02:57:53.066105  536361 command_runner.go:130] > # "nofile=1024:2048"
	I0116 02:57:53.066111  536361 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0116 02:57:53.066118  536361 command_runner.go:130] > # default_ulimits = [
	I0116 02:57:53.066122  536361 command_runner.go:130] > # ]
	I0116 02:57:53.066133  536361 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0116 02:57:53.066139  536361 command_runner.go:130] > # no_pivot = false
	I0116 02:57:53.066145  536361 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0116 02:57:53.066153  536361 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0116 02:57:53.066158  536361 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0116 02:57:53.066166  536361 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0116 02:57:53.066173  536361 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0116 02:57:53.066179  536361 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0116 02:57:53.066185  536361 command_runner.go:130] > # conmon = ""
	I0116 02:57:53.066190  536361 command_runner.go:130] > # Cgroup setting for conmon
	I0116 02:57:53.066198  536361 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0116 02:57:53.066204  536361 command_runner.go:130] > conmon_cgroup = "pod"
	I0116 02:57:53.066210  536361 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0116 02:57:53.066218  536361 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0116 02:57:53.066227  536361 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0116 02:57:53.066233  536361 command_runner.go:130] > # conmon_env = [
	I0116 02:57:53.066237  536361 command_runner.go:130] > # ]
	I0116 02:57:53.066244  536361 command_runner.go:130] > # Additional environment variables to set for all the
	I0116 02:57:53.066251  536361 command_runner.go:130] > # containers. These are overridden if set in the
	I0116 02:57:53.066259  536361 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0116 02:57:53.066266  536361 command_runner.go:130] > # default_env = [
	I0116 02:57:53.066269  536361 command_runner.go:130] > # ]
	I0116 02:57:53.066277  536361 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0116 02:57:53.066281  536361 command_runner.go:130] > # selinux = false
	I0116 02:57:53.066287  536361 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0116 02:57:53.066295  536361 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0116 02:57:53.066303  536361 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0116 02:57:53.066310  536361 command_runner.go:130] > # seccomp_profile = ""
	I0116 02:57:53.066315  536361 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0116 02:57:53.066323  536361 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0116 02:57:53.066331  536361 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0116 02:57:53.066336  536361 command_runner.go:130] > # which might increase security.
	I0116 02:57:53.066342  536361 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0116 02:57:53.066348  536361 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0116 02:57:53.066357  536361 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0116 02:57:53.066365  536361 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0116 02:57:53.066376  536361 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0116 02:57:53.066387  536361 command_runner.go:130] > # This option supports live configuration reload.
	I0116 02:57:53.066393  536361 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0116 02:57:53.066399  536361 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0116 02:57:53.066405  536361 command_runner.go:130] > # the cgroup blockio controller.
	I0116 02:57:53.066410  536361 command_runner.go:130] > # blockio_config_file = ""
	I0116 02:57:53.066418  536361 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0116 02:57:53.066422  536361 command_runner.go:130] > # irqbalance daemon.
	I0116 02:57:53.066429  536361 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0116 02:57:53.066446  536361 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0116 02:57:53.066453  536361 command_runner.go:130] > # This option supports live configuration reload.
	I0116 02:57:53.066458  536361 command_runner.go:130] > # rdt_config_file = ""
	I0116 02:57:53.066465  536361 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0116 02:57:53.066472  536361 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0116 02:57:53.066478  536361 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0116 02:57:53.066484  536361 command_runner.go:130] > # separate_pull_cgroup = ""
	I0116 02:57:53.066490  536361 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0116 02:57:53.066498  536361 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0116 02:57:53.066506  536361 command_runner.go:130] > # will be added.
	I0116 02:57:53.066511  536361 command_runner.go:130] > # default_capabilities = [
	I0116 02:57:53.066517  536361 command_runner.go:130] > # 	"CHOWN",
	I0116 02:57:53.066521  536361 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0116 02:57:53.066526  536361 command_runner.go:130] > # 	"FSETID",
	I0116 02:57:53.066530  536361 command_runner.go:130] > # 	"FOWNER",
	I0116 02:57:53.066536  536361 command_runner.go:130] > # 	"SETGID",
	I0116 02:57:53.066541  536361 command_runner.go:130] > # 	"SETUID",
	I0116 02:57:53.066547  536361 command_runner.go:130] > # 	"SETPCAP",
	I0116 02:57:53.066551  536361 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0116 02:57:53.066557  536361 command_runner.go:130] > # 	"KILL",
	I0116 02:57:53.066560  536361 command_runner.go:130] > # ]
	I0116 02:57:53.066569  536361 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0116 02:57:53.066578  536361 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0116 02:57:53.066584  536361 command_runner.go:130] > # add_inheritable_capabilities = true
	I0116 02:57:53.066590  536361 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0116 02:57:53.066602  536361 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0116 02:57:53.066608  536361 command_runner.go:130] > # default_sysctls = [
	I0116 02:57:53.066615  536361 command_runner.go:130] > # ]
	I0116 02:57:53.066622  536361 command_runner.go:130] > # List of devices on the host that a
	I0116 02:57:53.066628  536361 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0116 02:57:53.066634  536361 command_runner.go:130] > # allowed_devices = [
	I0116 02:57:53.066637  536361 command_runner.go:130] > # 	"/dev/fuse",
	I0116 02:57:53.066643  536361 command_runner.go:130] > # ]
	I0116 02:57:53.066648  536361 command_runner.go:130] > # List of additional devices, specified as
	I0116 02:57:53.066684  536361 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0116 02:57:53.066692  536361 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0116 02:57:53.066698  536361 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0116 02:57:53.066702  536361 command_runner.go:130] > # additional_devices = [
	I0116 02:57:53.066706  536361 command_runner.go:130] > # ]
	I0116 02:57:53.066713  536361 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0116 02:57:53.066717  536361 command_runner.go:130] > # cdi_spec_dirs = [
	I0116 02:57:53.066723  536361 command_runner.go:130] > # 	"/etc/cdi",
	I0116 02:57:53.066727  536361 command_runner.go:130] > # 	"/var/run/cdi",
	I0116 02:57:53.066733  536361 command_runner.go:130] > # ]
	I0116 02:57:53.066739  536361 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0116 02:57:53.066749  536361 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0116 02:57:53.066756  536361 command_runner.go:130] > # Defaults to false.
	I0116 02:57:53.066761  536361 command_runner.go:130] > # device_ownership_from_security_context = false
	I0116 02:57:53.066769  536361 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0116 02:57:53.066778  536361 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0116 02:57:53.066784  536361 command_runner.go:130] > # hooks_dir = [
	I0116 02:57:53.066789  536361 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0116 02:57:53.066794  536361 command_runner.go:130] > # ]
	I0116 02:57:53.066800  536361 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0116 02:57:53.066809  536361 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0116 02:57:53.066816  536361 command_runner.go:130] > # its default mounts from the following two files:
	I0116 02:57:53.066821  536361 command_runner.go:130] > #
	I0116 02:57:53.066828  536361 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0116 02:57:53.066836  536361 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0116 02:57:53.066844  536361 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0116 02:57:53.066850  536361 command_runner.go:130] > #
	I0116 02:57:53.066856  536361 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0116 02:57:53.066865  536361 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0116 02:57:53.066876  536361 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0116 02:57:53.066885  536361 command_runner.go:130] > #      only add mounts it finds in this file.
	I0116 02:57:53.066891  536361 command_runner.go:130] > #
	I0116 02:57:53.066895  536361 command_runner.go:130] > # default_mounts_file = ""
	I0116 02:57:53.066902  536361 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0116 02:57:53.066909  536361 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0116 02:57:53.066915  536361 command_runner.go:130] > # pids_limit = 0
	I0116 02:57:53.066921  536361 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0116 02:57:53.066929  536361 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0116 02:57:53.066937  536361 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0116 02:57:53.066947  536361 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0116 02:57:53.066954  536361 command_runner.go:130] > # log_size_max = -1
	I0116 02:57:53.066960  536361 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0116 02:57:53.066967  536361 command_runner.go:130] > # log_to_journald = false
	I0116 02:57:53.066973  536361 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0116 02:57:53.066980  536361 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0116 02:57:53.066986  536361 command_runner.go:130] > # Path to directory for container attach sockets.
	I0116 02:57:53.066993  536361 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0116 02:57:53.067003  536361 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0116 02:57:53.067011  536361 command_runner.go:130] > # bind_mount_prefix = ""
	I0116 02:57:53.067016  536361 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0116 02:57:53.067023  536361 command_runner.go:130] > # read_only = false
	I0116 02:57:53.067029  536361 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0116 02:57:53.067037  536361 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0116 02:57:53.067043  536361 command_runner.go:130] > # live configuration reload.
	I0116 02:57:53.067047  536361 command_runner.go:130] > # log_level = "info"
	I0116 02:57:53.067055  536361 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0116 02:57:53.067064  536361 command_runner.go:130] > # This option supports live configuration reload.
	I0116 02:57:53.067070  536361 command_runner.go:130] > # log_filter = ""
	I0116 02:57:53.067076  536361 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0116 02:57:53.067084  536361 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0116 02:57:53.067091  536361 command_runner.go:130] > # separated by comma.
	I0116 02:57:53.067095  536361 command_runner.go:130] > # uid_mappings = ""
	I0116 02:57:53.067103  536361 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0116 02:57:53.067112  536361 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0116 02:57:53.067118  536361 command_runner.go:130] > # separated by comma.
	I0116 02:57:53.067125  536361 command_runner.go:130] > # gid_mappings = ""
	I0116 02:57:53.067134  536361 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0116 02:57:53.067142  536361 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0116 02:57:53.067150  536361 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0116 02:57:53.067156  536361 command_runner.go:130] > # minimum_mappable_uid = -1
	I0116 02:57:53.067162  536361 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0116 02:57:53.067170  536361 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0116 02:57:53.067179  536361 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0116 02:57:53.067185  536361 command_runner.go:130] > # minimum_mappable_gid = -1
	I0116 02:57:53.067192  536361 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0116 02:57:53.067200  536361 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0116 02:57:53.067208  536361 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0116 02:57:53.067214  536361 command_runner.go:130] > # ctr_stop_timeout = 30
	I0116 02:57:53.067219  536361 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0116 02:57:53.067229  536361 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0116 02:57:53.067236  536361 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0116 02:57:53.067241  536361 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0116 02:57:53.067247  536361 command_runner.go:130] > # drop_infra_ctr = true
	I0116 02:57:53.067255  536361 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0116 02:57:53.067262  536361 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0116 02:57:53.067272  536361 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0116 02:57:53.067280  536361 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0116 02:57:53.067288  536361 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0116 02:57:53.067295  536361 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0116 02:57:53.067300  536361 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0116 02:57:53.067308  536361 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0116 02:57:53.067314  536361 command_runner.go:130] > # pinns_path = ""
	I0116 02:57:53.067320  536361 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0116 02:57:53.067329  536361 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0116 02:57:53.067338  536361 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0116 02:57:53.067344  536361 command_runner.go:130] > # default_runtime = "runc"
	I0116 02:57:53.067349  536361 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0116 02:57:53.067359  536361 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating the path as a directory).
	I0116 02:57:53.067370  536361 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0116 02:57:53.067377  536361 command_runner.go:130] > # creation as a file is not desired either.
	I0116 02:57:53.067385  536361 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0116 02:57:53.067395  536361 command_runner.go:130] > # the hostname is being managed dynamically.
	I0116 02:57:53.067403  536361 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0116 02:57:53.067406  536361 command_runner.go:130] > # ]
	I0116 02:57:53.067412  536361 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0116 02:57:53.067420  536361 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0116 02:57:53.067428  536361 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0116 02:57:53.067440  536361 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0116 02:57:53.067446  536361 command_runner.go:130] > #
	I0116 02:57:53.067451  536361 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0116 02:57:53.067458  536361 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0116 02:57:53.067462  536361 command_runner.go:130] > #  runtime_type = "oci"
	I0116 02:57:53.067468  536361 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0116 02:57:53.067473  536361 command_runner.go:130] > #  privileged_without_host_devices = false
	I0116 02:57:53.067479  536361 command_runner.go:130] > #  allowed_annotations = []
	I0116 02:57:53.067483  536361 command_runner.go:130] > # Where:
	I0116 02:57:53.067491  536361 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0116 02:57:53.067497  536361 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0116 02:57:53.067505  536361 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0116 02:57:53.067517  536361 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0116 02:57:53.067523  536361 command_runner.go:130] > #   in $PATH.
	I0116 02:57:53.067529  536361 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0116 02:57:53.067538  536361 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0116 02:57:53.067545  536361 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0116 02:57:53.067551  536361 command_runner.go:130] > #   state.
	I0116 02:57:53.067557  536361 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0116 02:57:53.067565  536361 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0116 02:57:53.067571  536361 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0116 02:57:53.067579  536361 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0116 02:57:53.067585  536361 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0116 02:57:53.067593  536361 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0116 02:57:53.067600  536361 command_runner.go:130] > #   The currently recognized values are:
	I0116 02:57:53.067607  536361 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0116 02:57:53.067616  536361 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0116 02:57:53.067626  536361 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0116 02:57:53.067634  536361 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0116 02:57:53.067643  536361 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0116 02:57:53.067671  536361 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0116 02:57:53.067680  536361 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0116 02:57:53.067687  536361 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0116 02:57:53.067694  536361 command_runner.go:130] > #   should be moved to the container's cgroup
	I0116 02:57:53.067698  536361 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0116 02:57:53.067706  536361 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0116 02:57:53.067713  536361 command_runner.go:130] > runtime_type = "oci"
	I0116 02:57:53.067717  536361 command_runner.go:130] > runtime_root = "/run/runc"
	I0116 02:57:53.067723  536361 command_runner.go:130] > runtime_config_path = ""
	I0116 02:57:53.067728  536361 command_runner.go:130] > monitor_path = ""
	I0116 02:57:53.067733  536361 command_runner.go:130] > monitor_cgroup = ""
	I0116 02:57:53.067738  536361 command_runner.go:130] > monitor_exec_cgroup = ""
	I0116 02:57:53.067790  536361 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0116 02:57:53.067797  536361 command_runner.go:130] > # running containers
	I0116 02:57:53.067802  536361 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0116 02:57:53.067807  536361 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0116 02:57:53.067814  536361 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0116 02:57:53.067822  536361 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0116 02:57:53.067831  536361 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0116 02:57:53.067838  536361 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0116 02:57:53.067843  536361 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0116 02:57:53.067852  536361 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0116 02:57:53.067859  536361 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0116 02:57:53.067866  536361 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0116 02:57:53.067872  536361 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0116 02:57:53.067879  536361 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0116 02:57:53.067888  536361 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0116 02:57:53.067900  536361 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0116 02:57:53.067910  536361 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0116 02:57:53.067917  536361 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0116 02:57:53.067927  536361 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0116 02:57:53.067937  536361 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0116 02:57:53.067945  536361 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0116 02:57:53.067955  536361 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0116 02:57:53.067961  536361 command_runner.go:130] > # Example:
	I0116 02:57:53.067965  536361 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0116 02:57:53.067975  536361 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0116 02:57:53.067982  536361 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0116 02:57:53.067987  536361 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0116 02:57:53.067992  536361 command_runner.go:130] > # cpuset = "0-1"
	I0116 02:57:53.067997  536361 command_runner.go:130] > # cpushares = 512
	I0116 02:57:53.068002  536361 command_runner.go:130] > # Where:
	I0116 02:57:53.068007  536361 command_runner.go:130] > # The workload name is workload-type.
	I0116 02:57:53.068016  536361 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0116 02:57:53.068024  536361 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0116 02:57:53.068031  536361 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0116 02:57:53.068041  536361 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0116 02:57:53.068049  536361 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0116 02:57:53.068054  536361 command_runner.go:130] > # 
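
For reference, the workload opt-in described in the comments above is driven entirely by pod annotations. Below is a minimal Go sketch of a pod that opts into the example "workload-type" workload; the pod name, container name, and the "512" shares value are hypothetical, and the annotation keys come from the commented example.

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	func main() {
		// Hypothetical pod opting into the "workload-type" workload above.
		pod := corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name: "workload-demo",
				Annotations: map[string]string{
					// activation_annotation: key-only match, the value is ignored.
					"io.crio/workload": "",
					// Per-container override in the $annotation_prefix/$ctrName form.
					"io.crio.workload-type/app": `{"cpushares": "512"}`,
				},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{
					Name:  "app",
					Image: "registry.k8s.io/pause:3.9",
				}},
			},
		}
		fmt.Println(pod.ObjectMeta.Annotations)
	}
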
	I0116 02:57:53.068061  536361 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0116 02:57:53.068067  536361 command_runner.go:130] > #
	I0116 02:57:53.068072  536361 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0116 02:57:53.068080  536361 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0116 02:57:53.068089  536361 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0116 02:57:53.068099  536361 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0116 02:57:53.068107  536361 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0116 02:57:53.068112  536361 command_runner.go:130] > [crio.image]
	I0116 02:57:53.068118  536361 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0116 02:57:53.068127  536361 command_runner.go:130] > # default_transport = "docker://"
	I0116 02:57:53.068135  536361 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0116 02:57:53.068143  536361 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0116 02:57:53.068150  536361 command_runner.go:130] > # global_auth_file = ""
	I0116 02:57:53.068158  536361 command_runner.go:130] > # The image used to instantiate infra containers.
	I0116 02:57:53.068163  536361 command_runner.go:130] > # This option supports live configuration reload.
	I0116 02:57:53.068170  536361 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0116 02:57:53.068176  536361 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0116 02:57:53.068184  536361 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0116 02:57:53.068191  536361 command_runner.go:130] > # This option supports live configuration reload.
	I0116 02:57:53.068198  536361 command_runner.go:130] > # pause_image_auth_file = ""
	I0116 02:57:53.068204  536361 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0116 02:57:53.068212  536361 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0116 02:57:53.068221  536361 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0116 02:57:53.068233  536361 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0116 02:57:53.068240  536361 command_runner.go:130] > # pause_command = "/pause"
	I0116 02:57:53.068246  536361 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0116 02:57:53.068268  536361 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0116 02:57:53.068280  536361 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0116 02:57:53.068297  536361 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0116 02:57:53.068307  536361 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0116 02:57:53.068313  536361 command_runner.go:130] > # signature_policy = ""
	I0116 02:57:53.068321  536361 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0116 02:57:53.068330  536361 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0116 02:57:53.068336  536361 command_runner.go:130] > # changing them here.
	I0116 02:57:53.068341  536361 command_runner.go:130] > # insecure_registries = [
	I0116 02:57:53.068346  536361 command_runner.go:130] > # ]
	I0116 02:57:53.068353  536361 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0116 02:57:53.068360  536361 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0116 02:57:53.068367  536361 command_runner.go:130] > # image_volumes = "mkdir"
	I0116 02:57:53.068372  536361 command_runner.go:130] > # Temporary directory to use for storing big files
	I0116 02:57:53.068379  536361 command_runner.go:130] > # big_files_temporary_dir = ""
	I0116 02:57:53.068388  536361 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0116 02:57:53.068395  536361 command_runner.go:130] > # CNI plugins.
	I0116 02:57:53.068399  536361 command_runner.go:130] > [crio.network]
	I0116 02:57:53.068407  536361 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0116 02:57:53.068414  536361 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0116 02:57:53.068422  536361 command_runner.go:130] > # cni_default_network = ""
	I0116 02:57:53.068430  536361 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0116 02:57:53.068441  536361 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0116 02:57:53.068449  536361 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0116 02:57:53.068455  536361 command_runner.go:130] > # plugin_dirs = [
	I0116 02:57:53.068459  536361 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0116 02:57:53.068464  536361 command_runner.go:130] > # ]
	I0116 02:57:53.068470  536361 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0116 02:57:53.068476  536361 command_runner.go:130] > [crio.metrics]
	I0116 02:57:53.068481  536361 command_runner.go:130] > # Globally enable or disable metrics support.
	I0116 02:57:53.068487  536361 command_runner.go:130] > # enable_metrics = false
	I0116 02:57:53.068492  536361 command_runner.go:130] > # Specify enabled metrics collectors.
	I0116 02:57:53.068499  536361 command_runner.go:130] > # By default, all metrics are enabled.
	I0116 02:57:53.068508  536361 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0116 02:57:53.068517  536361 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0116 02:57:53.068525  536361 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0116 02:57:53.068529  536361 command_runner.go:130] > # metrics_collectors = [
	I0116 02:57:53.068535  536361 command_runner.go:130] > # 	"operations",
	I0116 02:57:53.068540  536361 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0116 02:57:53.068546  536361 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0116 02:57:53.068550  536361 command_runner.go:130] > # 	"operations_errors",
	I0116 02:57:53.068557  536361 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0116 02:57:53.068561  536361 command_runner.go:130] > # 	"image_pulls_by_name",
	I0116 02:57:53.068568  536361 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0116 02:57:53.068572  536361 command_runner.go:130] > # 	"image_pulls_failures",
	I0116 02:57:53.068578  536361 command_runner.go:130] > # 	"image_pulls_successes",
	I0116 02:57:53.068583  536361 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0116 02:57:53.068589  536361 command_runner.go:130] > # 	"image_layer_reuse",
	I0116 02:57:53.068593  536361 command_runner.go:130] > # 	"containers_oom_total",
	I0116 02:57:53.068600  536361 command_runner.go:130] > # 	"containers_oom",
	I0116 02:57:53.068604  536361 command_runner.go:130] > # 	"processes_defunct",
	I0116 02:57:53.068612  536361 command_runner.go:130] > # 	"operations_total",
	I0116 02:57:53.068619  536361 command_runner.go:130] > # 	"operations_latency_seconds",
	I0116 02:57:53.068624  536361 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0116 02:57:53.068630  536361 command_runner.go:130] > # 	"operations_errors_total",
	I0116 02:57:53.068634  536361 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0116 02:57:53.068641  536361 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0116 02:57:53.068645  536361 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0116 02:57:53.068652  536361 command_runner.go:130] > # 	"image_pulls_success_total",
	I0116 02:57:53.068656  536361 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0116 02:57:53.068662  536361 command_runner.go:130] > # 	"containers_oom_count_total",
	I0116 02:57:53.068666  536361 command_runner.go:130] > # ]
	I0116 02:57:53.068673  536361 command_runner.go:130] > # The port on which the metrics server will listen.
	I0116 02:57:53.068678  536361 command_runner.go:130] > # metrics_port = 9090
	I0116 02:57:53.068685  536361 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0116 02:57:53.068691  536361 command_runner.go:130] > # metrics_socket = ""
	I0116 02:57:53.068696  536361 command_runner.go:130] > # The certificate for the secure metrics server.
	I0116 02:57:53.068704  536361 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0116 02:57:53.068713  536361 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0116 02:57:53.068722  536361 command_runner.go:130] > # certificate on any modification event.
	I0116 02:57:53.068728  536361 command_runner.go:130] > # metrics_cert = ""
	I0116 02:57:53.068734  536361 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0116 02:57:53.068741  536361 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0116 02:57:53.068745  536361 command_runner.go:130] > # metrics_key = ""
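
If enable_metrics were turned on (it is left at the default false in this run), the collectors listed above would be exported in Prometheus text format on metrics_port. A minimal sketch of scraping them, assuming the default port 9090:

	package main

	import (
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		// Default (commented-out) metrics port from the config above.
		resp, err := http.Get("http://127.0.0.1:9090/metrics")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()

		body, err := io.ReadAll(resp.Body)
		if err != nil {
			panic(err)
		}
		// Emits lines such as: crio_operations_total{...} <value>
		fmt.Printf("%s", body)
	}
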
	I0116 02:57:53.068755  536361 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0116 02:57:53.068760  536361 command_runner.go:130] > [crio.tracing]
	I0116 02:57:53.068766  536361 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0116 02:57:53.068773  536361 command_runner.go:130] > # enable_tracing = false
	I0116 02:57:53.068778  536361 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0116 02:57:53.068785  536361 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0116 02:57:53.068790  536361 command_runner.go:130] > # Number of samples to collect per million spans.
	I0116 02:57:53.068797  536361 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0116 02:57:53.068803  536361 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0116 02:57:53.068808  536361 command_runner.go:130] > [crio.stats]
	I0116 02:57:53.068814  536361 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0116 02:57:53.068821  536361 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0116 02:57:53.068828  536361 command_runner.go:130] > # stats_collection_period = 0
	I0116 02:57:53.068903  536361 cni.go:84] Creating CNI manager for ""
	I0116 02:57:53.068912  536361 cni.go:136] 2 nodes found, recommending kindnet
	I0116 02:57:53.068922  536361 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 02:57:53.068942  536361 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-061156 NodeName:multinode-061156-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 02:57:53.069061  536361 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-061156-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 02:57:53.069113  536361 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-061156-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-061156 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 02:57:53.069158  536361 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 02:57:53.076846  536361 command_runner.go:130] > kubeadm
	I0116 02:57:53.076862  536361 command_runner.go:130] > kubectl
	I0116 02:57:53.076866  536361 command_runner.go:130] > kubelet
	I0116 02:57:53.077513  536361 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 02:57:53.077588  536361 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0116 02:57:53.085392  536361 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0116 02:57:53.100907  536361 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 02:57:53.116549  536361 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0116 02:57:53.119471  536361 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 02:57:53.128651  536361 host.go:66] Checking if "multinode-061156" exists ...
	I0116 02:57:53.128882  536361 config.go:182] Loaded profile config "multinode-061156": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 02:57:53.128899  536361 start.go:304] JoinCluster: &{Name:multinode-061156 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-061156 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 02:57:53.129019  536361 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0116 02:57:53.129074  536361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-061156
	I0116 02:57:53.144761  536361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33282 SSHKeyPath:/home/jenkins/minikube-integration/17965-443749/.minikube/machines/multinode-061156/id_rsa Username:docker}
	I0116 02:57:53.294035  536361 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token caknu1.ml44v5lcmcqoyhv4 --discovery-token-ca-cert-hash sha256:8cf2f52e6e786139868a71d0da6c4e60f90166b48a1f8c1755e09d650797d85a 
	I0116 02:57:53.294108  536361 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0116 02:57:53.294150  536361 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token caknu1.ml44v5lcmcqoyhv4 --discovery-token-ca-cert-hash sha256:8cf2f52e6e786139868a71d0da6c4e60f90166b48a1f8c1755e09d650797d85a --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-061156-m02"
	I0116 02:57:53.327927  536361 command_runner.go:130] ! W0116 02:57:53.327505    1108 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0116 02:57:53.355896  536361 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1048-gcp\n", err: exit status 1
	I0116 02:57:53.422956  536361 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0116 02:57:55.556801  536361 command_runner.go:130] > [preflight] Running pre-flight checks
	I0116 02:57:55.556829  536361 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0116 02:57:55.556837  536361 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1048-gcp
	I0116 02:57:55.556842  536361 command_runner.go:130] > OS: Linux
	I0116 02:57:55.556847  536361 command_runner.go:130] > CGROUPS_CPU: enabled
	I0116 02:57:55.556865  536361 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0116 02:57:55.556871  536361 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0116 02:57:55.556876  536361 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0116 02:57:55.556881  536361 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0116 02:57:55.556886  536361 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0116 02:57:55.556901  536361 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0116 02:57:55.556909  536361 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0116 02:57:55.556916  536361 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0116 02:57:55.556922  536361 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0116 02:57:55.556931  536361 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0116 02:57:55.556939  536361 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 02:57:55.556951  536361 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 02:57:55.556956  536361 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0116 02:57:55.556967  536361 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0116 02:57:55.556974  536361 command_runner.go:130] > This node has joined the cluster:
	I0116 02:57:55.556980  536361 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0116 02:57:55.556988  536361 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0116 02:57:55.556998  536361 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0116 02:57:55.557017  536361 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token caknu1.ml44v5lcmcqoyhv4 --discovery-token-ca-cert-hash sha256:8cf2f52e6e786139868a71d0da6c4e60f90166b48a1f8c1755e09d650797d85a --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-061156-m02": (2.26285433s)
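
The --discovery-token-ca-cert-hash in the join command above follows kubeadm's discovery scheme: it is the SHA-256 digest of the DER-encoded Subject Public Key Info of the cluster CA certificate. A minimal sketch of recomputing it, assuming the CA path used elsewhere in this run:

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// CA path as used in the kubelet configuration printed earlier.
		pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA.
		spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
		if err != nil {
			panic(err)
		}
		fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
	}
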
	I0116 02:57:55.557042  536361 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0116 02:57:55.639283  536361 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0116 02:57:55.718566  536361 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=6e8fa5f64d0e7272be43ff25ed3826261f0a2578 minikube.k8s.io/name=multinode-061156 minikube.k8s.io/updated_at=2024_01_16T02_57_55_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:57:55.788151  536361 command_runner.go:130] > node/multinode-061156-m02 labeled
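
The kubectl invocation above stamps minikube's metadata labels onto the newly joined node. The same result can be achieved programmatically; a minimal client-go sketch applying one of those labels via a JSON merge patch (the kubeconfig path is an assumption, and the real command sets several labels at once):

	package main

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/types"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig location; on the node itself minikube uses
		// /var/lib/minikube/kubeconfig instead.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// JSON merge patch adding one of the labels set by the command above.
		patch := []byte(`{"metadata":{"labels":{"minikube.k8s.io/primary":"false"}}}`)
		_, err = cs.CoreV1().Nodes().Patch(context.TODO(), "multinode-061156-m02",
			types.MergePatchType, patch, metav1.PatchOptions{})
		if err != nil {
			panic(err)
		}
	}
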
	I0116 02:57:55.790993  536361 start.go:306] JoinCluster complete in 2.662091786s
	I0116 02:57:55.791018  536361 cni.go:84] Creating CNI manager for ""
	I0116 02:57:55.791024  536361 cni.go:136] 2 nodes found, recommending kindnet
	I0116 02:57:55.791069  536361 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0116 02:57:55.794647  536361 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0116 02:57:55.794679  536361 command_runner.go:130] >   Size: 4085020   	Blocks: 7992       IO Block: 4096   regular file
	I0116 02:57:55.794696  536361 command_runner.go:130] > Device: 37h/55d	Inode: 1047659     Links: 1
	I0116 02:57:55.794703  536361 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0116 02:57:55.794711  536361 command_runner.go:130] > Access: 2023-12-04 16:39:01.000000000 +0000
	I0116 02:57:55.794719  536361 command_runner.go:130] > Modify: 2023-12-04 16:39:01.000000000 +0000
	I0116 02:57:55.794724  536361 command_runner.go:130] > Change: 2024-01-16 02:37:08.170599703 +0000
	I0116 02:57:55.794731  536361 command_runner.go:130] >  Birth: 2024-01-16 02:37:08.142597611 +0000
	I0116 02:57:55.794818  536361 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0116 02:57:55.794829  536361 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0116 02:57:55.811283  536361 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0116 02:57:56.009065  536361 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0116 02:57:56.012385  536361 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0116 02:57:56.014857  536361 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0116 02:57:56.026967  536361 command_runner.go:130] > daemonset.apps/kindnet configured
	I0116 02:57:56.031609  536361 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17965-443749/kubeconfig
	I0116 02:57:56.031865  536361 kapi.go:59] client config for multinode-061156: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17965-443749/.minikube/profiles/multinode-061156/client.crt", KeyFile:"/home/jenkins/minikube-integration/17965-443749/.minikube/profiles/multinode-061156/client.key", CAFile:"/home/jenkins/minikube-integration/17965-443749/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19be0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 02:57:56.032247  536361 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0116 02:57:56.032278  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:56.032290  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:56.032303  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:56.034340  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:56.034364  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:56.034374  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:56.034383  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:56.034392  536361 round_trippers.go:580]     Content-Length: 291
	I0116 02:57:56.034405  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:56 GMT
	I0116 02:57:56.034425  536361 round_trippers.go:580]     Audit-Id: 816b79dd-b678-41cd-8ecf-1fdeeeed0b3c
	I0116 02:57:56.034438  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:56.034449  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:56.034478  536361 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0402feb2-a751-4ef6-b708-443a517c68b1","resourceVersion":"428","creationTimestamp":"2024-01-16T02:56:54Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0116 02:57:56.034601  536361 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-061156" context rescaled to 1 replicas
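
The GET against .../deployments/coredns/scale above reads the Deployment's Scale subresource; rescaling to 1 replica is then a write through the same subresource. A minimal client-go sketch of the equivalent read-modify-write (the kubeconfig path is an assumption):

	package main

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Read the Scale subresource, as in the GET request above...
		scale, err := cs.AppsV1().Deployments("kube-system").GetScale(
			context.TODO(), "coredns", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// ...then write the desired replica count back through it.
		scale.Spec.Replicas = 1
		if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(
			context.TODO(), "coredns", scale, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}
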
	I0116 02:57:56.034637  536361 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0116 02:57:56.038322  536361 out.go:177] * Verifying Kubernetes components...
	I0116 02:57:56.039844  536361 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 02:57:56.051520  536361 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17965-443749/kubeconfig
	I0116 02:57:56.051775  536361 kapi.go:59] client config for multinode-061156: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17965-443749/.minikube/profiles/multinode-061156/client.crt", KeyFile:"/home/jenkins/minikube-integration/17965-443749/.minikube/profiles/multinode-061156/client.key", CAFile:"/home/jenkins/minikube-integration/17965-443749/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19be0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 02:57:56.052031  536361 node_ready.go:35] waiting up to 6m0s for node "multinode-061156-m02" to be "Ready" ...
	I0116 02:57:56.052095  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:57:56.052103  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:56.052111  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:56.052117  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:56.054272  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:56.054301  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:56.054311  536361 round_trippers.go:580]     Audit-Id: 36943d13-f71f-47c4-a0f3-071294581c28
	I0116 02:57:56.054321  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:56.054329  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:56.054338  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:56.054345  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:56.054361  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:56 GMT
	I0116 02:57:56.054524  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"467","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5653 chars]
	I0116 02:57:56.553129  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:57:56.553156  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:56.553165  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:56.553171  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:56.555442  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:56.555466  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:56.555476  536361 round_trippers.go:580]     Audit-Id: ea06af01-44dc-40a5-9e91-23f12f1813b3
	I0116 02:57:56.555484  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:56.555492  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:56.555508  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:56.555525  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:56.555532  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:56 GMT
	I0116 02:57:56.555665  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"469","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5762 chars]
	I0116 02:57:57.052384  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:57:57.052409  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:57.052417  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:57.052423  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:57.054741  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:57.054765  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:57.054776  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:57.054785  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:57.054793  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:57.054802  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:57 GMT
	I0116 02:57:57.054813  536361 round_trippers.go:580]     Audit-Id: a58f5bb8-c5c6-4430-9b33-d21ace6eda1d
	I0116 02:57:57.054821  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:57.054934  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"469","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5762 chars]
	I0116 02:57:57.552731  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:57:57.552751  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:57.552760  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:57.552766  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:57.554801  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:57.554828  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:57.554840  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:57.554849  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:57 GMT
	I0116 02:57:57.554857  536361 round_trippers.go:580]     Audit-Id: e3558b8d-b49c-4454-a523-0742f5fcbcc6
	I0116 02:57:57.554876  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:57.554889  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:57.554938  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:57.555096  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"469","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5762 chars]
	I0116 02:57:58.052418  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:57:58.052444  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:58.052456  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:58.052464  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:58.054860  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:58.054882  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:58.054889  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:58.054895  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:58.054900  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:58.054905  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:58.054910  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:58 GMT
	I0116 02:57:58.054915  536361 round_trippers.go:580]     Audit-Id: cab71265-9de0-48e6-9727-ac83a652c39e
	I0116 02:57:58.055028  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"469","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5762 chars]
	I0116 02:57:58.055328  536361 node_ready.go:58] node "multinode-061156-m02" has status "Ready":"False"
	I0116 02:57:58.552473  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:57:58.552497  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:58.552505  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:58.552511  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:58.554714  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:58.554733  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:58.554741  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:58 GMT
	I0116 02:57:58.554747  536361 round_trippers.go:580]     Audit-Id: 0c7a5715-bd3b-410d-89ff-65c331e8cbf6
	I0116 02:57:58.554752  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:58.554757  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:58.554762  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:58.554774  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:58.554917  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"469","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5762 chars]
	I0116 02:57:59.052506  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:57:59.052532  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:59.052548  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:59.052555  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:59.054835  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:59.054861  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:59.054871  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:59.054880  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:59 GMT
	I0116 02:57:59.054888  536361 round_trippers.go:580]     Audit-Id: e02edb23-2a56-4dad-a83d-3856ea4d0b85
	I0116 02:57:59.054895  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:59.054907  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:59.054915  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:59.055038  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"469","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5762 chars]
	I0116 02:57:59.552486  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:57:59.552527  536361 round_trippers.go:469] Request Headers:
	I0116 02:57:59.552538  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:59.552545  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:59.555058  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:59.555081  536361 round_trippers.go:577] Response Headers:
	I0116 02:57:59.555096  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:59.555104  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:57:59.555112  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:57:59.555119  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:59 GMT
	I0116 02:57:59.555124  536361 round_trippers.go:580]     Audit-Id: f8680850-920f-4ff4-a996-c78fab043f6d
	I0116 02:57:59.555129  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:59.555317  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"469","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5762 chars]
	I0116 02:58:00.052996  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:58:00.053027  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:00.053035  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:00.053041  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:00.055306  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:00.055333  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:00.055345  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:00.055352  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:00.055360  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:00 GMT
	I0116 02:58:00.055369  536361 round_trippers.go:580]     Audit-Id: 68dd46c0-24d5-4b81-8e59-f496def9bd2a
	I0116 02:58:00.055378  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:00.055386  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:00.055549  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"469","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5762 chars]
	I0116 02:58:00.055944  536361 node_ready.go:58] node "multinode-061156-m02" has status "Ready":"False"
	I0116 02:58:00.553169  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:58:00.553192  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:00.553200  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:00.553207  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:00.555897  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:00.555923  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:00.555935  536361 round_trippers.go:580]     Audit-Id: 74f5f943-9316-4b33-a946-7ee17fe4f030
	I0116 02:58:00.555944  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:00.555953  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:00.555962  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:00.555974  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:00.555980  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:00 GMT
	I0116 02:58:00.556116  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"469","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5762 chars]
	I0116 02:58:01.052432  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:58:01.052460  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:01.052469  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:01.052475  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:01.054807  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:01.054827  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:01.054834  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:01.054840  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:01.054845  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:01 GMT
	I0116 02:58:01.054850  536361 round_trippers.go:580]     Audit-Id: 8a19f0d6-f3c1-448e-ab4d-834deb8cd248
	I0116 02:58:01.054863  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:01.054873  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:01.055057  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"469","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5762 chars]
	I0116 02:58:01.552454  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:58:01.552494  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:01.552507  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:01.552515  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:01.555118  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:01.555146  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:01.555161  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:01.555169  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:01 GMT
	I0116 02:58:01.555179  536361 round_trippers.go:580]     Audit-Id: 3038448a-93ab-4568-8a23-0cf6781d8f81
	I0116 02:58:01.555187  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:01.555194  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:01.555205  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:01.555341  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"469","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5762 chars]
	I0116 02:58:02.052452  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:58:02.052478  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:02.052487  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:02.052494  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:02.054900  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:02.054921  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:02.054928  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:02.054934  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:02 GMT
	I0116 02:58:02.054939  536361 round_trippers.go:580]     Audit-Id: 2d117315-f1fe-495f-9845-183689725114
	I0116 02:58:02.054944  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:02.054949  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:02.054954  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:02.055137  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"469","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5762 chars]
	I0116 02:58:02.552524  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:58:02.552560  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:02.552569  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:02.552575  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:02.554964  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:02.554987  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:02.554997  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:02.555005  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:02.555013  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:02.555020  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:02 GMT
	I0116 02:58:02.555042  536361 round_trippers.go:580]     Audit-Id: e3fbfef7-bdb6-48c1-8f0f-afc3867cad50
	I0116 02:58:02.555055  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:02.555194  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"469","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5762 chars]
	I0116 02:58:02.555494  536361 node_ready.go:58] node "multinode-061156-m02" has status "Ready":"False"
	I0116 02:58:03.052450  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:58:03.052484  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:03.052493  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:03.052499  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:03.054813  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:03.054835  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:03.054845  536361 round_trippers.go:580]     Audit-Id: bd11d594-a159-4921-89ef-7577e85c87e9
	I0116 02:58:03.054852  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:03.054860  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:03.054869  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:03.054877  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:03.054890  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:03 GMT
	I0116 02:58:03.055108  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"469","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5762 chars]
	I0116 02:58:03.552671  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:58:03.552700  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:03.552712  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:03.552720  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:03.554960  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:03.554983  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:03.554993  536361 round_trippers.go:580]     Audit-Id: afaf4041-5e98-4d59-b037-730233991c7e
	I0116 02:58:03.555001  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:03.555013  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:03.555021  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:03.555036  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:03.555044  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:03 GMT
	I0116 02:58:03.555218  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"469","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5762 chars]
	I0116 02:58:04.052859  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:58:04.052886  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:04.052907  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:04.052914  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:04.055062  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:04.055091  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:04.055102  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:04.055112  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:04.055118  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:04.055124  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:04.055129  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:04 GMT
	I0116 02:58:04.055137  536361 round_trippers.go:580]     Audit-Id: 2660732f-e4f8-4858-bbd8-d472b6a1cdfe
	I0116 02:58:04.055307  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"469","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5762 chars]
	I0116 02:58:04.553066  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:58:04.553096  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:04.553107  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:04.553116  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:04.555554  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:04.555586  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:04.555597  536361 round_trippers.go:580]     Audit-Id: e141e9de-9330-4fab-955f-ace900f999dc
	I0116 02:58:04.555605  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:04.555611  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:04.555619  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:04.555624  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:04.555629  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:04 GMT
	I0116 02:58:04.555778  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"469","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5762 chars]
	I0116 02:58:04.556200  536361 node_ready.go:58] node "multinode-061156-m02" has status "Ready":"False"
	I0116 02:58:05.052269  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:58:05.052302  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:05.052313  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:05.052321  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:05.054691  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:05.055251  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:05.055287  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:05 GMT
	I0116 02:58:05.055298  536361 round_trippers.go:580]     Audit-Id: 0076b663-293e-4faa-b22c-e84e06a069c5
	I0116 02:58:05.055310  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:05.055324  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:05.055334  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:05.055344  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:05.055542  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"469","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5762 chars]
	I0116 02:58:05.552293  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:58:05.552327  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:05.552335  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:05.552341  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:05.554610  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:05.554640  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:05.554649  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:05.554657  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:05.554665  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:05 GMT
	I0116 02:58:05.554674  536361 round_trippers.go:580]     Audit-Id: edfb5023-82fc-4fe7-8625-53736ac18b19
	I0116 02:58:05.554697  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:05.554704  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:05.554860  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"489","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0116 02:58:06.052340  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:58:06.052366  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:06.052375  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:06.052382  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:06.054847  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:06.054876  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:06.054888  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:06.054923  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:06.054935  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:06.054947  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:06.054959  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:06 GMT
	I0116 02:58:06.054970  536361 round_trippers.go:580]     Audit-Id: a2c01a43-9b6c-455c-b6b0-59c8d10eb91f
	I0116 02:58:06.055142  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"489","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0116 02:58:06.552492  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:58:06.552522  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:06.552531  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:06.552537  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:06.554889  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:06.554915  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:06.554925  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:06.554933  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:06.554940  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:06 GMT
	I0116 02:58:06.554948  536361 round_trippers.go:580]     Audit-Id: d6927054-dac8-4e99-9b84-7bdc8fb9beca
	I0116 02:58:06.554958  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:06.554967  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:06.555096  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"489","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0116 02:58:07.052887  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:58:07.052927  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:07.052944  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:07.052953  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:07.055242  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:07.055266  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:07.055274  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:07.055280  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:07.055285  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:07.055290  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:07.055296  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:07 GMT
	I0116 02:58:07.055301  536361 round_trippers.go:580]     Audit-Id: 61ea4aa0-bc3e-4ef3-86ba-05a76ba40a4a
	I0116 02:58:07.055493  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"489","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0116 02:58:07.055839  536361 node_ready.go:58] node "multinode-061156-m02" has status "Ready":"False"
	I0116 02:58:07.552450  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:58:07.552473  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:07.552482  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:07.552488  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:07.555018  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:07.555039  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:07.555046  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:07.555052  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:07 GMT
	I0116 02:58:07.555057  536361 round_trippers.go:580]     Audit-Id: d8efdfa5-1134-4edf-908f-c7df768fcd9d
	I0116 02:58:07.555062  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:07.555068  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:07.555073  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:07.555246  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"489","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0116 02:58:08.052991  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:58:08.053015  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:08.053024  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:08.053030  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:08.055403  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:08.055423  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:08.055431  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:08.055437  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:08 GMT
	I0116 02:58:08.055442  536361 round_trippers.go:580]     Audit-Id: e3df65b5-df9e-48d5-b8fc-5c308805c12b
	I0116 02:58:08.055449  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:08.055460  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:08.055471  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:08.055631  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"489","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0116 02:58:08.552849  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:58:08.552879  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:08.552902  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:08.552914  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:08.554979  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:08.555004  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:08.555013  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:08 GMT
	I0116 02:58:08.555022  536361 round_trippers.go:580]     Audit-Id: 8101455c-55ec-40d4-9ff0-ee9c81686bc6
	I0116 02:58:08.555030  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:08.555038  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:08.555046  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:08.555054  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:08.555205  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"489","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0116 02:58:09.052845  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:58:09.052874  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:09.052882  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:09.052901  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:09.055215  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:09.055241  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:09.055252  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:09.055261  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:09.055267  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:09.055273  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:09 GMT
	I0116 02:58:09.055278  536361 round_trippers.go:580]     Audit-Id: 55201c26-cfcf-46d5-ae76-9dc757c72aeb
	I0116 02:58:09.055285  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:09.055411  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"489","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0116 02:58:09.553163  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:58:09.553192  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:09.553209  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:09.553220  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:09.555614  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:09.555641  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:09.555652  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:09 GMT
	I0116 02:58:09.555661  536361 round_trippers.go:580]     Audit-Id: 729330ba-3e68-4775-8079-456f0cd4f7e7
	I0116 02:58:09.555668  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:09.555677  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:09.555692  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:09.555702  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:09.555859  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"489","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0116 02:58:09.556195  536361 node_ready.go:58] node "multinode-061156-m02" has status "Ready":"False"
	I0116 02:58:10.052389  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:58:10.052413  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:10.052422  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:10.052429  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:10.054770  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:10.054791  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:10.054798  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:10 GMT
	I0116 02:58:10.054805  536361 round_trippers.go:580]     Audit-Id: 5666b2a8-94d7-4a4a-b3ca-09259a1cae37
	I0116 02:58:10.054810  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:10.054815  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:10.054820  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:10.054825  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:10.055016  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"489","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0116 02:58:10.552457  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:58:10.552482  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:10.552491  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:10.552496  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:10.555007  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:10.555029  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:10.555045  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:10.555057  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:10.555070  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:10.555078  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:10.555084  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:10 GMT
	I0116 02:58:10.555089  536361 round_trippers.go:580]     Audit-Id: 58454c8f-e2d6-496c-8c63-54fc41898282
	I0116 02:58:10.555215  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"489","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0116 02:58:11.052863  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:58:11.052899  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:11.052909  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:11.052915  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:11.055317  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:11.055339  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:11.055350  536361 round_trippers.go:580]     Audit-Id: 49f36bf7-7692-44ba-984e-02c3350b2760
	I0116 02:58:11.055360  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:11.055369  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:11.055377  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:11.055384  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:11.055392  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:11 GMT
	I0116 02:58:11.055552  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"489","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0116 02:58:11.553212  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:58:11.553244  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:11.553254  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:11.553260  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:11.555582  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:11.555601  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:11.555613  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:11.555618  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:11.555623  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:11.555629  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:11 GMT
	I0116 02:58:11.555634  536361 round_trippers.go:580]     Audit-Id: 738a43c9-7dd9-4c97-aacd-28f92fa8d857
	I0116 02:58:11.555639  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:11.555771  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"489","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0116 02:58:12.052760  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:58:12.052783  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:12.052794  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:12.052802  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:12.055251  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:12.055275  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:12.055285  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:12.055293  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:12.055301  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:12.055309  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:12 GMT
	I0116 02:58:12.055316  536361 round_trippers.go:580]     Audit-Id: c7c41a54-864e-468b-ab7a-5c3a6774b05a
	I0116 02:58:12.055328  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:12.055523  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"489","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0116 02:58:12.055878  536361 node_ready.go:58] node "multinode-061156-m02" has status "Ready":"False"
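	(The cycle above repeats every ~500ms: GET the node object, then re-check its "Ready" condition. A minimal client-go sketch of that polling pattern is below; the node name and half-second interval come from this log, while the kubeconfig loading, 6-minute timeout, and all helper code are illustrative assumptions rather than minikube's actual node_ready.go.)
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Load ~/.kube/config; assumed to point at the minikube cluster.
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		// Poll every 500ms, matching the cadence in the log; the 6m timeout is a guess.
		err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
			node, err := client.CoreV1().Nodes().Get(context.TODO(), "multinode-061156-m02", metav1.GetOptions{})
			if err != nil {
				// A transient API error ends this sketch; a real waiter would tolerate and retry.
				return false, err
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					fmt.Printf("node %q has status \"Ready\":%q\n", node.Name, c.Status)
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			// No Ready condition reported yet; keep polling.
			return false, nil
		})
		if err != nil {
			panic(err)
		}
	}
	
	(A watch on the node would avoid the per-tick GET traffic seen above, but straight polling keeps the waiter simple and explains the half-second request rhythm in this log.)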
	I0116 02:58:12.553116  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:58:12.553138  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:12.553147  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:12.553156  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:12.555435  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:12.555455  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:12.555462  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:12.555467  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:12.555472  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:12 GMT
	I0116 02:58:12.555477  536361 round_trippers.go:580]     Audit-Id: affc8637-3621-4659-89f9-084e8d87b383
	I0116 02:58:12.555482  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:12.555489  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:12.555665  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"489","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0116 02:58:13.052314  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:58:13.052345  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:13.052357  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:13.052382  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:13.054669  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:13.054690  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:13.054701  536361 round_trippers.go:580]     Audit-Id: 28684293-f137-4757-80ca-92addecc85e8
	I0116 02:58:13.054709  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:13.054717  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:13.054724  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:13.054731  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:13.054739  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:13 GMT
	I0116 02:58:13.054887  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"489","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0116 02:58:13.552488  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:58:13.552512  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:13.552521  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:13.552527  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:13.554784  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:13.554807  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:13.554815  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:13.554822  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:13 GMT
	I0116 02:58:13.554828  536361 round_trippers.go:580]     Audit-Id: 91b43587-f6f5-4bcc-9603-0619183cce2e
	I0116 02:58:13.554833  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:13.554838  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:13.554845  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:13.555126  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"489","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0116 02:58:14.052494  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:58:14.052523  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:14.052532  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:14.052539  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:14.054726  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:14.054748  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:14.054756  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:14.054761  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:14.054766  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:14.054771  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:14 GMT
	I0116 02:58:14.054776  536361 round_trippers.go:580]     Audit-Id: e09ba466-99f4-4fb7-8d7d-4454af5ce772
	I0116 02:58:14.054788  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:14.054954  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"489","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0116 02:58:14.552472  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:58:14.552498  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:14.552507  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:14.552513  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:14.556312  536361 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:58:14.556342  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:14.556354  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:14.556362  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:14.556370  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:14.556379  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:14 GMT
	I0116 02:58:14.556390  536361 round_trippers.go:580]     Audit-Id: 86c1dde7-69f0-4cec-8767-cb08b16b6a7c
	I0116 02:58:14.556402  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:14.556625  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"489","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0116 02:58:14.557000  536361 node_ready.go:58] node "multinode-061156-m02" has status "Ready":"False"
	I0116 02:58:15.053174  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:58:15.053198  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:15.053209  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:15.053217  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:15.055500  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:15.055521  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:15.055527  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:15.055533  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:15.055538  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:15.055543  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:15 GMT
	I0116 02:58:15.055549  536361 round_trippers.go:580]     Audit-Id: be3a68b6-4a5a-46b5-97b5-7bb251d3519a
	I0116 02:58:15.055554  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:15.055714  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"489","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0116 02:58:15.552339  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:58:15.552366  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:15.552378  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:15.552389  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:15.554850  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:15.554878  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:15.554889  536361 round_trippers.go:580]     Audit-Id: 9f0391e0-3e8e-4e5d-be54-82a7e54b7fbe
	I0116 02:58:15.554897  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:15.554904  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:15.554912  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:15.554926  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:15.554935  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:15 GMT
	I0116 02:58:15.555095  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"489","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0116 02:58:16.052446  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:58:16.052476  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:16.052488  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:16.052501  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:16.054826  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:16.054849  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:16.054859  536361 round_trippers.go:580]     Audit-Id: 6500277a-f42e-48d3-9ae9-a8807362132a
	I0116 02:58:16.054867  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:16.054875  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:16.054885  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:16.054898  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:16.054910  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:16 GMT
	I0116 02:58:16.055048  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"489","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0116 02:58:16.552498  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:58:16.552529  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:16.552541  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:16.552551  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:16.554808  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:16.554831  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:16.554840  536361 round_trippers.go:580]     Audit-Id: be9e94ae-e22f-44eb-acde-1572e2ce45e7
	I0116 02:58:16.554846  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:16.554851  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:16.554856  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:16.554861  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:16.554866  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:16 GMT
	I0116 02:58:16.555033  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"489","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0116 02:58:17.052900  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:58:17.052925  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:17.052933  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:17.052939  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:17.055198  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:17.055220  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:17.055230  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:17 GMT
	I0116 02:58:17.055235  536361 round_trippers.go:580]     Audit-Id: 88e2e7cb-b69f-4ccc-ba52-994184839b63
	I0116 02:58:17.055241  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:17.055253  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:17.055263  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:17.055276  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:17.055448  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"489","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0116 02:58:17.055772  536361 node_ready.go:58] node "multinode-061156-m02" has status "Ready":"False"
	I0116 02:58:17.553256  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:58:17.553285  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:17.553295  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:17.553303  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:17.555695  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:17.555721  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:17.555731  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:17.555740  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:17 GMT
	I0116 02:58:17.555748  536361 round_trippers.go:580]     Audit-Id: 1865ac5c-c1e9-4421-a2e3-07dafb6518ce
	I0116 02:58:17.555757  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:17.555765  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:17.555776  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:17.555889  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"489","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0116 02:58:18.052425  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:58:18.052451  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:18.052462  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:18.052471  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:18.054633  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:18.054651  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:18.054659  536361 round_trippers.go:580]     Audit-Id: 268a05e5-8f30-4444-be5b-9e2e625d43cf
	I0116 02:58:18.054666  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:18.054674  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:18.054697  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:18.054710  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:18.054720  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:18 GMT
	I0116 02:58:18.054845  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"489","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0116 02:58:18.552311  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:58:18.552338  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:18.552349  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:18.552358  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:18.554663  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:18.554682  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:18.554689  536361 round_trippers.go:580]     Audit-Id: 53995e49-1f85-48ef-ba0a-d7e8d2350e05
	I0116 02:58:18.554695  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:18.554703  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:18.554708  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:18.554713  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:18.554718  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:18 GMT
	I0116 02:58:18.554878  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"489","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0116 02:58:19.052473  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:58:19.052501  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:19.052514  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:19.052536  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:19.055047  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:19.055071  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:19.055079  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:19.055088  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:19.055096  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:19 GMT
	I0116 02:58:19.055106  536361 round_trippers.go:580]     Audit-Id: 8e99ac64-98c2-464d-8ca7-abf461e839cb
	I0116 02:58:19.055115  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:19.055127  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:19.055291  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"489","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0116 02:58:19.553004  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:58:19.553034  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:19.553043  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:19.553049  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:19.555263  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:19.555283  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:19.555290  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:19.555296  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:19.555301  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:19.555306  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:19.555311  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:19 GMT
	I0116 02:58:19.555316  536361 round_trippers.go:580]     Audit-Id: 9c244e12-3f55-4dae-a7fe-50dda995079d
	I0116 02:58:19.555482  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"489","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0116 02:58:19.555830  536361 node_ready.go:58] node "multinode-061156-m02" has status "Ready":"False"
	I0116 02:58:20.053222  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:58:20.053246  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:20.053254  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:20.053261  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:20.055679  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:20.055706  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:20.055715  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:20 GMT
	I0116 02:58:20.055723  536361 round_trippers.go:580]     Audit-Id: 07f86dd9-c3d3-4b9d-8e31-4f95afa9308a
	I0116 02:58:20.055730  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:20.055737  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:20.055746  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:20.055755  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:20.055884  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"489","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0116 02:58:20.552491  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:58:20.552517  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:20.552525  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:20.552532  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:20.554898  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:20.554918  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:20.554925  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:20.554931  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:20 GMT
	I0116 02:58:20.554936  536361 round_trippers.go:580]     Audit-Id: 398b694a-de13-4a74-aad9-c918aad25c4e
	I0116 02:58:20.554941  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:20.554947  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:20.554952  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:20.555068  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"489","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0116 02:58:21.052435  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:58:21.052460  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:21.052469  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:21.052475  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:21.054817  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:21.054839  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:21.054846  536361 round_trippers.go:580]     Audit-Id: c497f228-08c5-4df8-9b7c-e75e4586a17a
	I0116 02:58:21.054852  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:21.054857  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:21.054862  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:21.054867  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:21.054873  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:21 GMT
	I0116 02:58:21.055013  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"489","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0116 02:58:21.552457  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:58:21.552484  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:21.552494  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:21.552503  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:21.554858  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:21.554887  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:21.554903  536361 round_trippers.go:580]     Audit-Id: e87fd8e3-dd3e-4e79-a9b6-8b5d8c76e767
	I0116 02:58:21.554909  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:21.554914  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:21.554920  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:21.554925  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:21.554930  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:21 GMT
	I0116 02:58:21.555041  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"489","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0116 02:58:22.052997  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:58:22.053023  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:22.053031  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:22.053037  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:22.055332  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:22.055354  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:22.055361  536361 round_trippers.go:580]     Audit-Id: fe28c2d8-9b5d-4661-ae50-ab94c8caadd4
	I0116 02:58:22.055368  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:22.055377  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:22.055385  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:22.055393  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:22.055401  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:22 GMT
	I0116 02:58:22.055523  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"489","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0116 02:58:22.055827  536361 node_ready.go:58] node "multinode-061156-m02" has status "Ready":"False"
	I0116 02:58:22.553209  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:58:22.553233  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:22.553241  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:22.553248  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:22.555594  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:22.555614  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:22.555624  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:22.555633  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:22.555641  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:22.555648  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:22 GMT
	I0116 02:58:22.555655  536361 round_trippers.go:580]     Audit-Id: 4128b714-02b9-44a3-9297-68cb55fd2c98
	I0116 02:58:22.555663  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:22.555776  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"489","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0116 02:58:23.052367  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:58:23.052390  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:23.052398  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:23.052404  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:23.054450  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:23.054469  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:23.054479  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:23.054488  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:23.054496  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:23.054503  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:23 GMT
	I0116 02:58:23.054510  536361 round_trippers.go:580]     Audit-Id: 85dd4833-dd16-4cf3-818c-0ca58732a81d
	I0116 02:58:23.054518  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:23.054626  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"489","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0116 02:58:23.553078  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:58:23.553106  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:23.553114  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:23.553121  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:23.555336  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:23.555361  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:23.555370  536361 round_trippers.go:580]     Audit-Id: 7992f361-0ced-4a5a-8650-b20d35ebca59
	I0116 02:58:23.555375  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:23.555381  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:23.555386  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:23.555391  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:23.555397  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:23 GMT
	I0116 02:58:23.555509  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"489","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0116 02:58:24.053128  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:58:24.053153  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:24.053161  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:24.053167  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:24.055427  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:24.055456  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:24.055465  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:24.055474  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:24.055482  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:24.055489  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:24.055497  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:24 GMT
	I0116 02:58:24.055505  536361 round_trippers.go:580]     Audit-Id: 67c1d37f-ebda-46ac-9f92-ab5422a29974
	I0116 02:58:24.055657  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"489","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0116 02:58:24.056083  536361 node_ready.go:58] node "multinode-061156-m02" has status "Ready":"False"
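
	(Editor's sketch, for orientation only.) The loop recorded above is a plain readiness poll: the client re-GETs the node object roughly every 500ms and checks whether its Ready condition has turned True, logging node_ready.go:58 while it is still False. Below is a minimal client-go sketch of an equivalent wait loop. It is not minikube's actual implementation; the kubeconfig path, the hard-coded node name, and the 5-minute timeout are illustrative assumptions.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Illustrative assumption: credentials come from the default kubeconfig.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Re-fetch the node every 500ms (the cadence visible in the log) until
		// its NodeReady condition reports True or the illustrative timeout expires.
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 5*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := client.CoreV1().Nodes().Get(ctx, "multinode-061156-m02", metav1.GetOptions{})
				if err != nil {
					return false, err
				}
				for _, cond := range node.Status.Conditions {
					if cond.Type == corev1.NodeReady {
						return cond.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		fmt.Println("node ready wait finished:", err)
	}

	wait.PollUntilContextTimeout returns nil once the condition function reports true, and a timeout error if the node never becomes Ready within the deadline; the log resumes below with the poll still in progress.
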
	I0116 02:58:24.552300  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:58:24.552322  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:24.552330  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:24.552336  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:24.554603  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:24.554625  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:24.554635  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:24.554642  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:24.554650  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:24.554657  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:24.554666  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:24 GMT
	I0116 02:58:24.554675  536361 round_trippers.go:580]     Audit-Id: 5ba46d44-b5ab-470e-b5fb-b128fe788c17
	I0116 02:58:24.554847  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"489","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0116 02:58:25.052436  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:58:25.052460  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:25.052469  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:25.052475  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:25.054700  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:25.054720  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:25.054727  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:25.054734  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:25 GMT
	I0116 02:58:25.054739  536361 round_trippers.go:580]     Audit-Id: a90f4874-b411-4198-baa2-94435d7fa859
	I0116 02:58:25.054744  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:25.054751  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:25.054756  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:25.054938  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"489","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0116 02:58:25.552447  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:58:25.552479  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:25.552491  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:25.552508  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:25.554732  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:25.554756  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:25.554768  536361 round_trippers.go:580]     Audit-Id: 1f3515ae-afe3-498e-a7d6-7b990ba68a4f
	I0116 02:58:25.554777  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:25.554785  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:25.554794  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:25.554802  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:25.554812  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:25 GMT
	I0116 02:58:25.554959  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"489","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0116 02:58:26.052424  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:58:26.052450  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:26.052462  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:26.052473  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:26.054768  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:26.054793  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:26.054803  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:26.054812  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:26.054820  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:26.054828  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:26 GMT
	I0116 02:58:26.054837  536361 round_trippers.go:580]     Audit-Id: 982da0a0-63ea-429e-8a1f-0497d6c92062
	I0116 02:58:26.054853  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:26.054975  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"489","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0116 02:58:26.552457  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:58:26.552483  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:26.552492  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:26.552499  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:26.554811  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:26.554837  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:26.554846  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:26 GMT
	I0116 02:58:26.554858  536361 round_trippers.go:580]     Audit-Id: 590af98b-d28c-4df9-aea0-65e9d870b237
	I0116 02:58:26.554866  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:26.554875  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:26.554886  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:26.554898  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:26.555011  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"489","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0116 02:58:26.555321  536361 node_ready.go:58] node "multinode-061156-m02" has status "Ready":"False"
	I0116 02:58:27.052845  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:58:27.052869  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:27.052884  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:27.052898  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:27.055047  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:27.055066  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:27.055073  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:27.055079  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:27 GMT
	I0116 02:58:27.055084  536361 round_trippers.go:580]     Audit-Id: 33c70957-0e50-40bf-b2cb-17577734d71e
	I0116 02:58:27.055091  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:27.055098  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:27.055107  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:27.055252  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"489","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0116 02:58:27.552972  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:58:27.552995  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:27.553003  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:27.553022  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:27.555149  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:27.555176  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:27.555187  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:27.555196  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:27.555206  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:27 GMT
	I0116 02:58:27.555215  536361 round_trippers.go:580]     Audit-Id: 301a3d01-30fb-40cb-8415-85f2c3d98ba3
	I0116 02:58:27.555226  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:27.555238  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:27.555417  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"489","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0116 02:58:28.052961  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:58:28.052986  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:28.052995  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:28.053002  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:28.055379  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:28.055401  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:28.055412  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:28.055422  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:28.055431  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:28 GMT
	I0116 02:58:28.055440  536361 round_trippers.go:580]     Audit-Id: 75a27f9d-0b37-4092-944e-0f2c9e1a014f
	I0116 02:58:28.055449  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:28.055454  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:28.055576  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"514","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5848 chars]
	I0116 02:58:28.055987  536361 node_ready.go:49] node "multinode-061156-m02" has status "Ready":"True"
	I0116 02:58:28.056009  536361 node_ready.go:38] duration metric: took 32.003962563s waiting for node "multinode-061156-m02" to be "Ready" ...
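
Editor's note: the 32 s wait above is the node_ready loop — the client re-fetches the Node object roughly every 500 ms until its NodeReady condition reports "True" (the resourceVersion change 489 → 514 marks the update that flipped it). A minimal client-go sketch of the same check follows; the helper name waitNodeReady is hypothetical, and minikube's actual node_ready.go differs in retry and logging details:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady re-fetches the Node until its NodeReady condition is True,
// mirroring the 500ms GET loop in the log. Hypothetical helper, not
// minikube's actual implementation.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, err // a GET error aborts this simple wait
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil // condition not reported yet
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitNodeReady(context.Background(), cs, "multinode-061156-m02"); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}

wait.PollUntilContextTimeout returns as soon as the condition function reports true, which matches the ~500 ms cadence of the GETs above.
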
	I0116 02:58:28.056022  536361 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 02:58:28.056102  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0116 02:58:28.056114  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:28.056125  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:28.056135  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:28.059122  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:28.059151  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:28.059162  536361 round_trippers.go:580]     Audit-Id: 09a2ba8c-adc5-4c7e-acf2-0b5dbb9d8731
	I0116 02:58:28.059171  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:28.059186  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:28.059197  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:28.059210  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:28.059234  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:28 GMT
	I0116 02:58:28.059687  536361 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"514"},"items":[{"metadata":{"name":"coredns-5dd5756b68-4rrfv","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d6092a0e-384a-4e9a-92b1-f5a394a2eb25","resourceVersion":"423","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"7a1015df-c877-493c-bb76-694615980976","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a1015df-c877-493c-bb76-694615980976\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68972 chars]
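
Editor's note: after the node is Ready, the test issues a single GET of the whole kube-system PodList and then waits on each system-critical pod. The sketch below shows the equivalent query per label selector (server-side filtering) rather than the list-once-then-filter approach the log uses; the label sets are copied from the pod_ready message above, and a kubeconfig pointed at the cluster is assumed:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// One List per label set; the log instead fetches the whole kube-system
	// PodList in a single request and filters client-side.
	for _, selector := range []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy",
		"component=kube-scheduler",
	} {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%-45s %s\n", p.Name, selector)
		}
	}
}
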
	I0116 02:58:28.061787  536361 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-4rrfv" in "kube-system" namespace to be "Ready" ...
	I0116 02:58:28.061886  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-4rrfv
	I0116 02:58:28.061897  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:28.061908  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:28.061918  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:28.063529  536361 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 02:58:28.063548  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:28.063558  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:28.063566  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:28.063575  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:28.063585  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:28.063595  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:28 GMT
	I0116 02:58:28.063607  536361 round_trippers.go:580]     Audit-Id: 5bee5d6b-28c1-484e-9106-e5827d0949df
	I0116 02:58:28.063719  536361 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-4rrfv","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d6092a0e-384a-4e9a-92b1-f5a394a2eb25","resourceVersion":"423","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"7a1015df-c877-493c-bb76-694615980976","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a1015df-c877-493c-bb76-694615980976\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0116 02:58:28.064152  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:58:28.064167  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:28.064177  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:28.064187  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:28.065847  536361 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 02:58:28.065862  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:28.065869  536361 round_trippers.go:580]     Audit-Id: 3cfbaf15-1ecf-4710-9795-90d0ef88dd8d
	I0116 02:58:28.065874  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:28.065880  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:28.065886  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:28.065894  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:28.065902  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:28 GMT
	I0116 02:58:28.066033  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"405","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0116 02:58:28.066393  536361 pod_ready.go:92] pod "coredns-5dd5756b68-4rrfv" in "kube-system" namespace has status "Ready":"True"
	I0116 02:58:28.066412  536361 pod_ready.go:81] duration metric: took 4.600868ms waiting for pod "coredns-5dd5756b68-4rrfv" in "kube-system" namespace to be "Ready" ...
	I0116 02:58:28.066422  536361 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-061156" in "kube-system" namespace to be "Ready" ...
	I0116 02:58:28.066493  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-061156
	I0116 02:58:28.066503  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:28.066512  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:28.066524  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:28.068190  536361 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 02:58:28.068205  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:28.068212  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:28.068217  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:28 GMT
	I0116 02:58:28.068223  536361 round_trippers.go:580]     Audit-Id: f8a3ec2a-c946-4332-9f75-e095d706abef
	I0116 02:58:28.068228  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:28.068233  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:28.068238  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:28.068353  536361 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-061156","namespace":"kube-system","uid":"e49c4a4d-ee57-4241-b505-e98608e6ddbf","resourceVersion":"278","creationTimestamp":"2024-01-16T02:56:54Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"81687b1150086af7b3cfc80a39f848b7","kubernetes.io/config.mirror":"81687b1150086af7b3cfc80a39f848b7","kubernetes.io/config.seen":"2024-01-16T02:56:54.246239492Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:56:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0116 02:58:28.068776  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:58:28.068802  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:28.068814  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:28.068829  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:28.070341  536361 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 02:58:28.070361  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:28.070371  536361 round_trippers.go:580]     Audit-Id: cafd4b8d-198e-42d0-8374-9e4c7cdb775c
	I0116 02:58:28.070380  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:28.070401  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:28.070409  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:28.070415  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:28.070421  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:28 GMT
	I0116 02:58:28.070514  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"405","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0116 02:58:28.070793  536361 pod_ready.go:92] pod "etcd-multinode-061156" in "kube-system" namespace has status "Ready":"True"
	I0116 02:58:28.070809  536361 pod_ready.go:81] duration metric: took 4.377041ms waiting for pod "etcd-multinode-061156" in "kube-system" namespace to be "Ready" ...
	I0116 02:58:28.070821  536361 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-061156" in "kube-system" namespace to be "Ready" ...
	I0116 02:58:28.070874  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-061156
	I0116 02:58:28.070881  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:28.070887  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:28.070895  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:28.072351  536361 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 02:58:28.072365  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:28.072371  536361 round_trippers.go:580]     Audit-Id: 3eeddad8-d057-47e5-ab55-9370b4e04f94
	I0116 02:58:28.072377  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:28.072382  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:28.072387  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:28.072392  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:28.072397  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:28 GMT
	I0116 02:58:28.072550  536361 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-061156","namespace":"kube-system","uid":"da3c627a-b324-482c-8416-cea88abe00ae","resourceVersion":"281","creationTimestamp":"2024-01-16T02:56:52Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"a713ba27efa0d680cee11acec275764d","kubernetes.io/config.mirror":"a713ba27efa0d680cee11acec275764d","kubernetes.io/config.seen":"2024-01-16T02:56:48.475217310Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:56:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0116 02:58:28.072944  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:58:28.072957  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:28.072965  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:28.072971  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:28.074405  536361 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 02:58:28.074418  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:28.074424  536361 round_trippers.go:580]     Audit-Id: b01b734b-19ab-43c7-9fe5-b0611d17c3ef
	I0116 02:58:28.074431  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:28.074436  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:28.074441  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:28.074448  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:28.074455  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:28 GMT
	I0116 02:58:28.074592  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"405","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0116 02:58:28.074836  536361 pod_ready.go:92] pod "kube-apiserver-multinode-061156" in "kube-system" namespace has status "Ready":"True"
	I0116 02:58:28.074850  536361 pod_ready.go:81] duration metric: took 4.020483ms waiting for pod "kube-apiserver-multinode-061156" in "kube-system" namespace to be "Ready" ...
	I0116 02:58:28.074857  536361 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-061156" in "kube-system" namespace to be "Ready" ...
	I0116 02:58:28.074897  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-061156
	I0116 02:58:28.074905  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:28.074911  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:28.074917  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:28.076367  536361 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 02:58:28.076382  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:28.076388  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:28.076396  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:28.076402  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:28 GMT
	I0116 02:58:28.076407  536361 round_trippers.go:580]     Audit-Id: 3b2f2aba-5566-43b6-9914-1401455c8690
	I0116 02:58:28.076415  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:28.076423  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:28.076564  536361 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-061156","namespace":"kube-system","uid":"5b792e14-d13e-43b8-a708-f27c31290eda","resourceVersion":"395","creationTimestamp":"2024-01-16T02:56:54Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b32ca35dfee6bdabe276b7d93aa2f570","kubernetes.io/config.mirror":"b32ca35dfee6bdabe276b7d93aa2f570","kubernetes.io/config.seen":"2024-01-16T02:56:54.246247549Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:56:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0116 02:58:28.076941  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:58:28.076954  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:28.076961  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:28.076968  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:28.078373  536361 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 02:58:28.078392  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:28.078402  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:28 GMT
	I0116 02:58:28.078411  536361 round_trippers.go:580]     Audit-Id: becdc0d7-be5e-4d6f-a3d6-4f281984ee5d
	I0116 02:58:28.078420  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:28.078431  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:28.078440  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:28.078451  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:28.078554  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"405","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0116 02:58:28.078797  536361 pod_ready.go:92] pod "kube-controller-manager-multinode-061156" in "kube-system" namespace has status "Ready":"True"
	I0116 02:58:28.078810  536361 pod_ready.go:81] duration metric: took 3.947286ms waiting for pod "kube-controller-manager-multinode-061156" in "kube-system" namespace to be "Ready" ...
	I0116 02:58:28.078819  536361 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vpjfj" in "kube-system" namespace to be "Ready" ...
	I0116 02:58:28.253150  536361 request.go:629] Waited for 174.255659ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vpjfj
	I0116 02:58:28.253211  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vpjfj
	I0116 02:58:28.253216  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:28.253224  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:28.253230  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:28.255600  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:28.255619  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:28.255626  536361 round_trippers.go:580]     Audit-Id: 99fc4f19-f006-4b94-bbbe-75f08ad7606f
	I0116 02:58:28.255634  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:28.255642  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:28.255650  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:28.255657  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:28.255664  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:28 GMT
	I0116 02:58:28.255782  536361 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-vpjfj","generateName":"kube-proxy-","namespace":"kube-system","uid":"ee936686-319b-4f8e-91ae-3341cb23dd8b","resourceVersion":"480","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d3fcd685-a8b7-4613-b0c1-a2055037991b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d3fcd685-a8b7-4613-b0c1-a2055037991b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
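
Editor's note: the "Waited ... due to client-side throttling, not priority and fairness" lines come from client-go's default token-bucket rate limiter (QPS 5, Burst 10), not from server-side API Priority and Fairness; once the pod and node GETs exceed about five requests per second, the client delays itself. A sketch of raising the limits on a rest.Config — the values 50/100 are illustrative, not what minikube uses:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// Defaults are QPS=5, Burst=10, which produce the client-side waits seen
	// in the log. Raising them removes the self-imposed delay (illustrative
	// values only).
	cfg.QPS = 50
	cfg.Burst = 100

	cs := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("nodes:", len(nodes.Items))
}
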
	I0116 02:58:28.453448  536361 request.go:629] Waited for 197.246888ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:58:28.453510  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156-m02
	I0116 02:58:28.453515  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:28.453523  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:28.453529  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:28.455849  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:28.455881  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:28.455888  536361 round_trippers.go:580]     Audit-Id: 489726e5-d2e3-41b3-ba27-2df74c50c2bd
	I0116 02:58:28.455894  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:28.455899  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:28.455904  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:28.455909  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:28.455917  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:28 GMT
	I0116 02:58:28.456061  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156-m02","uid":"10e3856f-73c2-4c02-a98e-441ca3759a3b","resourceVersion":"514","creationTimestamp":"2024-01-16T02:57:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_57_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5848 chars]
	I0116 02:58:28.456543  536361 pod_ready.go:92] pod "kube-proxy-vpjfj" in "kube-system" namespace has status "Ready":"True"
	I0116 02:58:28.456571  536361 pod_ready.go:81] duration metric: took 377.74478ms waiting for pod "kube-proxy-vpjfj" in "kube-system" namespace to be "Ready" ...
	I0116 02:58:28.456584  536361 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xsg8g" in "kube-system" namespace to be "Ready" ...
	I0116 02:58:28.653445  536361 request.go:629] Waited for 196.750769ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xsg8g
	I0116 02:58:28.653531  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xsg8g
	I0116 02:58:28.653543  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:28.653553  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:28.653565  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:28.655751  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:28.655770  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:28.655777  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:28 GMT
	I0116 02:58:28.655784  536361 round_trippers.go:580]     Audit-Id: 388bdadb-9584-445e-97b2-1b30aa3c0b5d
	I0116 02:58:28.655792  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:28.655800  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:28.655813  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:28.655824  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:28.655950  536361 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xsg8g","generateName":"kube-proxy-","namespace":"kube-system","uid":"0e531a4d-783f-4c65-9580-2b8e43a88adb","resourceVersion":"390","creationTimestamp":"2024-01-16T02:57:06Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d3fcd685-a8b7-4613-b0c1-a2055037991b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d3fcd685-a8b7-4613-b0c1-a2055037991b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I0116 02:58:28.853813  536361 request.go:629] Waited for 197.366572ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:58:28.853889  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:58:28.853894  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:28.853902  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:28.853908  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:28.856143  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:28.856166  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:28.856176  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:28.856183  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:28.856195  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:28 GMT
	I0116 02:58:28.856204  536361 round_trippers.go:580]     Audit-Id: a21b362e-7235-4a5f-a10b-1326e7c082c4
	I0116 02:58:28.856210  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:28.856216  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:28.856329  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"405","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0116 02:58:28.856650  536361 pod_ready.go:92] pod "kube-proxy-xsg8g" in "kube-system" namespace has status "Ready":"True"
	I0116 02:58:28.856670  536361 pod_ready.go:81] duration metric: took 400.07311ms waiting for pod "kube-proxy-xsg8g" in "kube-system" namespace to be "Ready" ...
	I0116 02:58:28.856679  536361 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-061156" in "kube-system" namespace to be "Ready" ...
	I0116 02:58:29.053637  536361 request.go:629] Waited for 196.87564ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-061156
	I0116 02:58:29.053715  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-061156
	I0116 02:58:29.053720  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:29.053728  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:29.053735  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:29.056030  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:29.056051  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:29.056058  536361 round_trippers.go:580]     Audit-Id: 0f3df057-2012-4f9b-b976-bc6a20db6c64
	I0116 02:58:29.056064  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:29.056069  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:29.056075  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:29.056080  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:29.056085  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:29 GMT
	I0116 02:58:29.056237  536361 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-061156","namespace":"kube-system","uid":"9eee1777-3859-44ba-b059-2eb8b1aac78f","resourceVersion":"394","creationTimestamp":"2024-01-16T02:56:54Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"bce42c0a4b713a3d4100ced7dd0a146a","kubernetes.io/config.mirror":"bce42c0a4b713a3d4100ced7dd0a146a","kubernetes.io/config.seen":"2024-01-16T02:56:54.246248691Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:56:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0116 02:58:29.253987  536361 request.go:629] Waited for 197.353214ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:58:29.254059  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-061156
	I0116 02:58:29.254063  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:29.254082  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:29.254091  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:29.256377  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:29.256396  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:29.256403  536361 round_trippers.go:580]     Audit-Id: 60b55dab-d8b7-40e2-9e67-4877d8110dfb
	I0116 02:58:29.256409  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:29.256414  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:29.256418  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:29.256424  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:29.256429  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:29 GMT
	I0116 02:58:29.256578  536361 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"405","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:56:51Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0116 02:58:29.256899  536361 pod_ready.go:92] pod "kube-scheduler-multinode-061156" in "kube-system" namespace has status "Ready":"True"
	I0116 02:58:29.256917  536361 pod_ready.go:81] duration metric: took 400.22956ms waiting for pod "kube-scheduler-multinode-061156" in "kube-system" namespace to be "Ready" ...
	I0116 02:58:29.256929  536361 pod_ready.go:38] duration metric: took 1.200889332s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
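
Editor's note: each pod_ready check above reduces to reading the pod's PodReady condition. A minimal sketch; isPodReady is a hypothetical helper (upstream Kubernetes ships equivalents) and the pod name is taken from the log:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the PodReady condition is True, which is the
// test the pod_ready lines above apply to each system pod.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(),
		"kube-scheduler-multinode-061156", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println(pod.Name, "ready:", isPodReady(pod))
}
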
	I0116 02:58:29.256950  536361 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 02:58:29.256996  536361 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 02:58:29.267777  536361 system_svc.go:56] duration metric: took 10.816894ms WaitForService to wait for kubelet.
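
Editor's note: the kubelet check shells out to systemd — `systemctl is-active --quiet <unit>` exits 0 only when the unit is active. The same probe from Go, run locally purely for illustration (minikube executes it over SSH inside the node container via ssh_runner, as the line above shows):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Exit status 0 means the unit is active; any non-zero status surfaces
	// here as a non-nil error. Assumes a systemd host with a kubelet unit.
	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}
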
	I0116 02:58:29.267807  536361 kubeadm.go:581] duration metric: took 33.23313913s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 02:58:29.267894  536361 node_conditions.go:102] verifying NodePressure condition ...
	I0116 02:58:29.453349  536361 request.go:629] Waited for 185.353589ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0116 02:58:29.453405  536361 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0116 02:58:29.453410  536361 round_trippers.go:469] Request Headers:
	I0116 02:58:29.453418  536361 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:29.453424  536361 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:29.455809  536361 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:29.455840  536361 round_trippers.go:577] Response Headers:
	I0116 02:58:29.455852  536361 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6bae505a-cede-4e1f-812a-ba72099fe716
	I0116 02:58:29.455862  536361 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3823efc0-3c97-4dad-a480-3bdddcfdc82d
	I0116 02:58:29.455872  536361 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:29 GMT
	I0116 02:58:29.455879  536361 round_trippers.go:580]     Audit-Id: 84ed0deb-a564-415e-b4ad-bae2708ac1af
	I0116 02:58:29.455888  536361 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:29.455893  536361 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:29.456140  536361 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"514"},"items":[{"metadata":{"name":"multinode-061156","uid":"fa759502-9d05-48ba-8fa3-443cff883e2a","resourceVersion":"405","creationTimestamp":"2024-01-16T02:56:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-061156","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-061156","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_56_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 12840 chars]
	I0116 02:58:29.456839  536361 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0116 02:58:29.456862  536361 node_conditions.go:123] node cpu capacity is 8
	I0116 02:58:29.456876  536361 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0116 02:58:29.456885  536361 node_conditions.go:123] node cpu capacity is 8
	I0116 02:58:29.456894  536361 node_conditions.go:105] duration metric: took 188.993906ms to run NodePressure ...
	I0116 02:58:29.456910  536361 start.go:228] waiting for startup goroutines ...
	I0116 02:58:29.456940  536361 start.go:242] writing updated cluster config ...
	I0116 02:58:29.457314  536361 ssh_runner.go:195] Run: rm -f paused
	I0116 02:58:29.504382  536361 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0116 02:58:29.507059  536361 out.go:177] * Done! kubectl is now configured to use "multinode-061156" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 16 02:57:39 multinode-061156 crio[954]: time="2024-01-16 02:57:39.183671895Z" level=info msg="Starting container: 80521b1bf2ac5a6f84dfadf19060463aed153f678b34e6c562f5336c0cacab0d" id=9033a642-c3ad-4ae6-ac77-1e5f1264bf41 name=/runtime.v1.RuntimeService/StartContainer
	Jan 16 02:57:39 multinode-061156 crio[954]: time="2024-01-16 02:57:39.190644288Z" level=info msg="Created container 3efbfd7df8c837a322101e2101ef3e0ed3deeb56148e3ef61fd788f9973c68fe: kube-system/coredns-5dd5756b68-4rrfv/coredns" id=621229dd-cc4b-4c67-aad9-94a8997b4cb4 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 16 02:57:39 multinode-061156 crio[954]: time="2024-01-16 02:57:39.191171791Z" level=info msg="Starting container: 3efbfd7df8c837a322101e2101ef3e0ed3deeb56148e3ef61fd788f9973c68fe" id=25b97f05-9570-44c7-ab94-9debf8ea13e2 name=/runtime.v1.RuntimeService/StartContainer
	Jan 16 02:57:39 multinode-061156 crio[954]: time="2024-01-16 02:57:39.201874596Z" level=info msg="Started container" PID=2317 containerID=80521b1bf2ac5a6f84dfadf19060463aed153f678b34e6c562f5336c0cacab0d description=kube-system/storage-provisioner/storage-provisioner id=9033a642-c3ad-4ae6-ac77-1e5f1264bf41 name=/runtime.v1.RuntimeService/StartContainer sandboxID=00140cabfd82a0eedc0ff7fb17b4b997a889d88f62978e29db39e84f2c510939
	Jan 16 02:57:39 multinode-061156 crio[954]: time="2024-01-16 02:57:39.204851954Z" level=info msg="Started container" PID=2332 containerID=3efbfd7df8c837a322101e2101ef3e0ed3deeb56148e3ef61fd788f9973c68fe description=kube-system/coredns-5dd5756b68-4rrfv/coredns id=25b97f05-9570-44c7-ab94-9debf8ea13e2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9cc36200e1c796740971863ca88b6198d300ac595e648c1be64b4c17e4e5a722
	Jan 16 02:58:30 multinode-061156 crio[954]: time="2024-01-16 02:58:30.503565123Z" level=info msg="Running pod sandbox: default/busybox-5bc68d56bd-4dmmg/POD" id=a3fc205e-b947-4eff-ab96-4e0ad6624f32 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 16 02:58:30 multinode-061156 crio[954]: time="2024-01-16 02:58:30.503640788Z" level=warning msg="Allowed annotations are specified for workload []"
	Jan 16 02:58:30 multinode-061156 crio[954]: time="2024-01-16 02:58:30.517975556Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-4dmmg Namespace:default ID:7f829201bec1db76047da0bfe9b8d6507ca316cae582d2190da0ac9692b32d3b UID:b3c7bc25-7b93-41f2-927d-d52591df900c NetNS:/var/run/netns/982cd314-57ba-4e8d-8e2a-893ee6709315 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jan 16 02:58:30 multinode-061156 crio[954]: time="2024-01-16 02:58:30.518015215Z" level=info msg="Adding pod default_busybox-5bc68d56bd-4dmmg to CNI network \"kindnet\" (type=ptp)"
	Jan 16 02:58:30 multinode-061156 crio[954]: time="2024-01-16 02:58:30.528291357Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-4dmmg Namespace:default ID:7f829201bec1db76047da0bfe9b8d6507ca316cae582d2190da0ac9692b32d3b UID:b3c7bc25-7b93-41f2-927d-d52591df900c NetNS:/var/run/netns/982cd314-57ba-4e8d-8e2a-893ee6709315 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jan 16 02:58:30 multinode-061156 crio[954]: time="2024-01-16 02:58:30.528433337Z" level=info msg="Checking pod default_busybox-5bc68d56bd-4dmmg for CNI network kindnet (type=ptp)"
	Jan 16 02:58:30 multinode-061156 crio[954]: time="2024-01-16 02:58:30.532205026Z" level=info msg="Ran pod sandbox 7f829201bec1db76047da0bfe9b8d6507ca316cae582d2190da0ac9692b32d3b with infra container: default/busybox-5bc68d56bd-4dmmg/POD" id=a3fc205e-b947-4eff-ab96-4e0ad6624f32 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 16 02:58:30 multinode-061156 crio[954]: time="2024-01-16 02:58:30.533249657Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=979a15a3-ab71-4b66-bc82-88f6b207e105 name=/runtime.v1.ImageService/ImageStatus
	Jan 16 02:58:30 multinode-061156 crio[954]: time="2024-01-16 02:58:30.533489651Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=979a15a3-ab71-4b66-bc82-88f6b207e105 name=/runtime.v1.ImageService/ImageStatus
	Jan 16 02:58:30 multinode-061156 crio[954]: time="2024-01-16 02:58:30.534214315Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=29a8df76-8d5f-422e-a323-499298c7c2ec name=/runtime.v1.ImageService/PullImage
	Jan 16 02:58:30 multinode-061156 crio[954]: time="2024-01-16 02:58:30.535061207Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Jan 16 02:58:31 multinode-061156 crio[954]: time="2024-01-16 02:58:31.269181692Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Jan 16 02:58:32 multinode-061156 crio[954]: time="2024-01-16 02:58:32.993782782Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335" id=29a8df76-8d5f-422e-a323-499298c7c2ec name=/runtime.v1.ImageService/PullImage
	Jan 16 02:58:32 multinode-061156 crio[954]: time="2024-01-16 02:58:32.994604049Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=637a22c8-e08f-4a2e-bc6e-0e00d34123f4 name=/runtime.v1.ImageService/ImageStatus
	Jan 16 02:58:32 multinode-061156 crio[954]: time="2024-01-16 02:58:32.995290516Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=637a22c8-e08f-4a2e-bc6e-0e00d34123f4 name=/runtime.v1.ImageService/ImageStatus
	Jan 16 02:58:32 multinode-061156 crio[954]: time="2024-01-16 02:58:32.996176331Z" level=info msg="Creating container: default/busybox-5bc68d56bd-4dmmg/busybox" id=6814cc0d-870c-4227-b233-919d78a2a1f9 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 16 02:58:32 multinode-061156 crio[954]: time="2024-01-16 02:58:32.996363732Z" level=warning msg="Allowed annotations are specified for workload []"
	Jan 16 02:58:33 multinode-061156 crio[954]: time="2024-01-16 02:58:33.038783491Z" level=info msg="Created container ae869b8bbf5ef41e3c6c03a1bc7927660020a2692dec6eeca03dc5b2fca8ad22: default/busybox-5bc68d56bd-4dmmg/busybox" id=6814cc0d-870c-4227-b233-919d78a2a1f9 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 16 02:58:33 multinode-061156 crio[954]: time="2024-01-16 02:58:33.039470864Z" level=info msg="Starting container: ae869b8bbf5ef41e3c6c03a1bc7927660020a2692dec6eeca03dc5b2fca8ad22" id=53b55a1f-a92b-45ef-82cb-7f0e97c48878 name=/runtime.v1.RuntimeService/StartContainer
	Jan 16 02:58:33 multinode-061156 crio[954]: time="2024-01-16 02:58:33.046660484Z" level=info msg="Started container" PID=2512 containerID=ae869b8bbf5ef41e3c6c03a1bc7927660020a2692dec6eeca03dc5b2fca8ad22 description=default/busybox-5bc68d56bd-4dmmg/busybox id=53b55a1f-a92b-45ef-82cb-7f0e97c48878 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7f829201bec1db76047da0bfe9b8d6507ca316cae582d2190da0ac9692b32d3b
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	ae869b8bbf5ef       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 seconds ago        Running             busybox                   0                   7f829201bec1d       busybox-5bc68d56bd-4dmmg
	3efbfd7df8c83       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      58 seconds ago       Running             coredns                   0                   9cc36200e1c79       coredns-5dd5756b68-4rrfv
	80521b1bf2ac5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      58 seconds ago       Running             storage-provisioner       0                   00140cabfd82a       storage-provisioner
	a8d828e361560       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      About a minute ago   Running             kube-proxy                0                   939decdb9175a       kube-proxy-xsg8g
	094be2f018f21       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      About a minute ago   Running             kindnet-cni               0                   7e3f311495068       kindnet-86pdd
	aeccaeb5db85d       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      About a minute ago   Running             etcd                      0                   3d01233609d3d       etcd-multinode-061156
	58c9c64357258       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      About a minute ago   Running             kube-controller-manager   0                   714ada2b5e275       kube-controller-manager-multinode-061156
	2abdd56261d18       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      About a minute ago   Running             kube-apiserver            0                   95705bb1c6d2a       kube-apiserver-multinode-061156
	e6eb1cb38e1a5       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      About a minute ago   Running             kube-scheduler            0                   4f5d255426344       kube-scheduler-multinode-061156
	
	
	==> coredns [3efbfd7df8c837a322101e2101ef3e0ed3deeb56148e3ef61fd788f9973c68fe] <==
	[INFO] 10.244.0.3:58403 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000093067s
	[INFO] 10.244.1.2:32834 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114614s
	[INFO] 10.244.1.2:51223 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001643049s
	[INFO] 10.244.1.2:44505 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000084245s
	[INFO] 10.244.1.2:40499 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000130285s
	[INFO] 10.244.1.2:39434 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00106683s
	[INFO] 10.244.1.2:59228 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000047733s
	[INFO] 10.244.1.2:55604 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000064471s
	[INFO] 10.244.1.2:54627 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000042265s
	[INFO] 10.244.0.3:47231 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100747s
	[INFO] 10.244.0.3:35856 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000073184s
	[INFO] 10.244.0.3:50927 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000051494s
	[INFO] 10.244.0.3:41664 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000054783s
	[INFO] 10.244.1.2:33574 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113299s
	[INFO] 10.244.1.2:54344 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000087229s
	[INFO] 10.244.1.2:52160 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075093s
	[INFO] 10.244.1.2:57735 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000086175s
	[INFO] 10.244.0.3:38867 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00010049s
	[INFO] 10.244.0.3:37338 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000113659s
	[INFO] 10.244.0.3:59374 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000096103s
	[INFO] 10.244.0.3:51502 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000090465s
	[INFO] 10.244.1.2:48566 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012411s
	[INFO] 10.244.1.2:35273 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000088149s
	[INFO] 10.244.1.2:57944 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000076261s
	[INFO] 10.244.1.2:56269 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00007187s
	
	
	==> describe nodes <==
	Name:               multinode-061156
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-061156
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e8fa5f64d0e7272be43ff25ed3826261f0a2578
	                    minikube.k8s.io/name=multinode-061156
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_16T02_56_55_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 02:56:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-061156
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Jan 2024 02:58:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Jan 2024 02:57:38 +0000   Tue, 16 Jan 2024 02:56:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Jan 2024 02:57:38 +0000   Tue, 16 Jan 2024 02:56:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Jan 2024 02:57:38 +0000   Tue, 16 Jan 2024 02:56:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Jan 2024 02:57:38 +0000   Tue, 16 Jan 2024 02:57:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-061156
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	System Info:
	  Machine ID:                 af855ef8d5594f16aed3f2eae89f6f5a
	  System UUID:                8c3dbaee-eca7-41ff-b8f1-00e62571dd2c
	  Boot ID:                    cc6eb99d-2787-4545-a9c9-22d5006806a3
	  Kernel Version:             5.15.0-1048-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-4dmmg                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 coredns-5dd5756b68-4rrfv                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     90s
	  kube-system                 etcd-multinode-061156                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         103s
	  kube-system                 kindnet-86pdd                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      91s
	  kube-system                 kube-apiserver-multinode-061156             250m (3%)     0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-controller-manager-multinode-061156    200m (2%)     0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-proxy-xsg8g                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-scheduler-multinode-061156             100m (1%)     0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 89s                  kube-proxy       
	  Normal  Starting                 109s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  109s (x4 over 109s)  kubelet          Node multinode-061156 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    109s (x3 over 109s)  kubelet          Node multinode-061156 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     109s (x3 over 109s)  kubelet          Node multinode-061156 status is now: NodeHasSufficientPID
	  Normal  Starting                 103s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  103s                 kubelet          Node multinode-061156 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    103s                 kubelet          Node multinode-061156 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     103s                 kubelet          Node multinode-061156 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           91s                  node-controller  Node multinode-061156 event: Registered Node multinode-061156 in Controller
	  Normal  NodeReady                59s                  kubelet          Node multinode-061156 status is now: NodeReady
	
	
	Name:               multinode-061156-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-061156-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e8fa5f64d0e7272be43ff25ed3826261f0a2578
	                    minikube.k8s.io/name=multinode-061156
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_01_16T02_57_55_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 02:57:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-061156-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Jan 2024 02:58:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Jan 2024 02:58:27 +0000   Tue, 16 Jan 2024 02:57:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Jan 2024 02:58:27 +0000   Tue, 16 Jan 2024 02:57:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Jan 2024 02:58:27 +0000   Tue, 16 Jan 2024 02:57:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Jan 2024 02:58:27 +0000   Tue, 16 Jan 2024 02:58:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-061156-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	System Info:
	  Machine ID:                 eee676784da343fd9fb1b5b4b0c9a910
	  System UUID:                fde74f4f-5dcd-41c3-92f3-251603fda442
	  Boot ID:                    cc6eb99d-2787-4545-a9c9-22d5006806a3
	  Kernel Version:             5.15.0-1048-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-hwz9l    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kindnet-j57x4               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      42s
	  kube-system                 kube-proxy-vpjfj            0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 40s                kube-proxy       
	  Normal  NodeHasSufficientMemory  42s (x5 over 44s)  kubelet          Node multinode-061156-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    42s (x5 over 44s)  kubelet          Node multinode-061156-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     42s (x5 over 44s)  kubelet          Node multinode-061156-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           41s                node-controller  Node multinode-061156-m02 event: Registered Node multinode-061156-m02 in Controller
	  Normal  NodeReady                10s                kubelet          Node multinode-061156-m02 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.007353] FS-Cache: O-key=[8] 'd7a20f0200000000'
	[  +0.004940] FS-Cache: N-cookie c=00000021 [p=00000015 fl=2 nc=0 na=1]
	[  +0.007945] FS-Cache: N-cookie d=00000000f7250940{9p.inode} n=000000006b0f1592
	[  +0.007364] FS-Cache: N-key=[8] 'd7a20f0200000000'
	[  +0.285216] FS-Cache: Duplicate cookie detected
	[  +0.004716] FS-Cache: O-cookie c=0000001b [p=00000015 fl=226 nc=0 na=1]
	[  +0.006747] FS-Cache: O-cookie d=00000000f7250940{9p.inode} n=00000000fab5c785
	[  +0.007358] FS-Cache: O-key=[8] 'dda20f0200000000'
	[  +0.004971] FS-Cache: N-cookie c=00000022 [p=00000015 fl=2 nc=0 na=1]
	[  +0.007930] FS-Cache: N-cookie d=00000000f7250940{9p.inode} n=0000000041298e86
	[  +0.008749] FS-Cache: N-key=[8] 'dda20f0200000000'
	[Jan16 02:48] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000031] ll header: 00000000: da b9 ff e8 c2 1a 3a 9b f7 c5 8d d7 08 00
	[  +1.011963] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: da b9 ff e8 c2 1a 3a 9b f7 c5 8d d7 08 00
	[  +2.015838] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: da b9 ff e8 c2 1a 3a 9b f7 c5 8d d7 08 00
	[Jan16 02:49] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: da b9 ff e8 c2 1a 3a 9b f7 c5 8d d7 08 00
	[  +8.191334] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: da b9 ff e8 c2 1a 3a 9b f7 c5 8d d7 08 00
	[ +16.126801] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: da b9 ff e8 c2 1a 3a 9b f7 c5 8d d7 08 00
	[ +33.021533] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: da b9 ff e8 c2 1a 3a 9b f7 c5 8d d7 08 00
	
	
	==> etcd [aeccaeb5db85d432feb083ee64ea94cb9fbe7b299e3497af0eb1dafa14863c5f] <==
	{"level":"info","ts":"2024-01-16T02:56:49.236746Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2024-01-16T02:56:49.236873Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2024-01-16T02:56:49.237014Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-16T02:56:49.237397Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-16T02:56:50.021229Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2024-01-16T02:56:50.021276Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-01-16T02:56:50.021291Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2024-01-16T02:56:50.021307Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2024-01-16T02:56:50.021313Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2024-01-16T02:56:50.021322Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2024-01-16T02:56:50.02133Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2024-01-16T02:56:50.022273Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T02:56:50.022935Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-061156 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-16T02:56:50.022973Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-16T02:56:50.022997Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-16T02:56:50.023203Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-16T02:56:50.023232Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T02:56:50.023268Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-16T02:56:50.023333Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T02:56:50.023374Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T02:56:50.024236Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-16T02:56:50.024242Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2024-01-16T02:57:45.505551Z","caller":"traceutil/trace.go:171","msg":"trace[1092283731] transaction","detail":"{read_only:false; response_revision:433; number_of_response:1; }","duration":"249.245304ms","start":"2024-01-16T02:57:45.256287Z","end":"2024-01-16T02:57:45.505532Z","steps":["trace[1092283731] 'process raft request'  (duration: 249.10084ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T02:57:45.679558Z","caller":"traceutil/trace.go:171","msg":"trace[2088981181] transaction","detail":"{read_only:false; response_revision:434; number_of_response:1; }","duration":"392.890973ms","start":"2024-01-16T02:57:45.28664Z","end":"2024-01-16T02:57:45.679531Z","steps":["trace[2088981181] 'process raft request'  (duration: 327.984376ms)","trace[2088981181] 'compare'  (duration: 64.804614ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-16T02:57:45.680055Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T02:57:45.286624Z","time spent":"393.005127ms","remote":"127.0.0.1:53114","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":553,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/multinode-061156\" mod_revision:402 > success:<request_put:<key:\"/registry/leases/kube-node-lease/multinode-061156\" value_size:496 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/multinode-061156\" > >"}
	
	
	==> kernel <==
	 02:58:37 up  2:41,  0 users,  load average: 0.92, 1.03, 1.18
	Linux multinode-061156 5.15.0-1048-gcp #56~20.04.1-Ubuntu SMP Fri Nov 24 16:52:37 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [094be2f018f212ef0dd77b0f0311685603ebf65d2de5946e4696176d9653cbde] <==
	I0116 02:57:08.302282       1 main.go:116] setting mtu 1500 for CNI 
	I0116 02:57:08.302300       1 main.go:146] kindnetd IP family: "ipv4"
	I0116 02:57:08.302324       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0116 02:57:38.507210       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0116 02:57:38.514673       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0116 02:57:38.514701       1 main.go:227] handling current node
	I0116 02:57:48.528137       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0116 02:57:48.528160       1 main.go:227] handling current node
	I0116 02:57:58.540170       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0116 02:57:58.540200       1 main.go:227] handling current node
	I0116 02:57:58.540212       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0116 02:57:58.540219       1 main.go:250] Node multinode-061156-m02 has CIDR [10.244.1.0/24] 
	I0116 02:57:58.540435       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	I0116 02:58:08.553309       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0116 02:58:08.553342       1 main.go:227] handling current node
	I0116 02:58:08.553356       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0116 02:58:08.553363       1 main.go:250] Node multinode-061156-m02 has CIDR [10.244.1.0/24] 
	I0116 02:58:18.565824       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0116 02:58:18.565847       1 main.go:227] handling current node
	I0116 02:58:18.565859       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0116 02:58:18.565867       1 main.go:250] Node multinode-061156-m02 has CIDR [10.244.1.0/24] 
	I0116 02:58:28.586691       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0116 02:58:28.587513       1 main.go:227] handling current node
	I0116 02:58:28.587603       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0116 02:58:28.587651       1 main.go:250] Node multinode-061156-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [2abdd56261d18b6195c0fdcedc1c83aa1ea002cab03b254dc4bade2a1a9f815c] <==
	I0116 02:56:51.514640       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0116 02:56:51.514665       1 aggregator.go:166] initial CRD sync complete...
	I0116 02:56:51.514682       1 autoregister_controller.go:141] Starting autoregister controller
	I0116 02:56:51.514687       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0116 02:56:51.514693       1 cache.go:39] Caches are synced for autoregister controller
	I0116 02:56:51.515296       1 controller.go:624] quota admission added evaluator for: namespaces
	I0116 02:56:51.600550       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0116 02:56:51.600582       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0116 02:56:51.606480       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0116 02:56:52.415172       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0116 02:56:52.418612       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0116 02:56:52.418632       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0116 02:56:52.837647       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0116 02:56:52.869521       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0116 02:56:52.908499       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0116 02:56:52.914533       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0116 02:56:52.915435       1 controller.go:624] quota admission added evaluator for: endpoints
	I0116 02:56:52.919264       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0116 02:56:53.526196       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0116 02:56:54.190040       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0116 02:56:54.199461       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0116 02:56:54.209813       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0116 02:57:06.884271       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0116 02:57:07.134970       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [58c9c64357258a8538323c6a56db07a86f3d8faa1db81aaf4231d04454d14fa4] <==
	I0116 02:57:38.815812       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="78.03µs"
	I0116 02:57:39.412553       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="120.559µs"
	I0116 02:57:39.432689       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.719942ms"
	I0116 02:57:39.432868       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="103.938µs"
	I0116 02:57:41.385790       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0116 02:57:55.403624       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-061156-m02\" does not exist"
	I0116 02:57:55.409533       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-061156-m02" podCIDRs=["10.244.1.0/24"]
	I0116 02:57:55.413916       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-vpjfj"
	I0116 02:57:55.413943       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-j57x4"
	I0116 02:57:56.387760       1 event.go:307] "Event occurred" object="multinode-061156-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-061156-m02 event: Registered Node multinode-061156-m02 in Controller"
	I0116 02:57:56.387880       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-061156-m02"
	I0116 02:58:27.726037       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-061156-m02"
	I0116 02:58:30.181713       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I0116 02:58:30.191203       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-hwz9l"
	I0116 02:58:30.194638       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-4dmmg"
	I0116 02:58:30.198456       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="16.945795ms"
	I0116 02:58:30.203238       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="4.733106ms"
	I0116 02:58:30.203331       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="53.291µs"
	I0116 02:58:30.209823       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="79.16µs"
	I0116 02:58:30.214126       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="50.416µs"
	I0116 02:58:31.404361       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-hwz9l" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-hwz9l"
	I0116 02:58:33.511051       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="3.891793ms"
	I0116 02:58:33.511152       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="56.759µs"
	I0116 02:58:33.989427       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="4.732395ms"
	I0116 02:58:33.989540       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="64.161µs"
	
	
	==> kube-proxy [a8d828e3615601d0df322948e3f9c1a16699adae21175ce15d6bf4a7ad6eeea9] <==
	I0116 02:57:08.236947       1 server_others.go:69] "Using iptables proxy"
	I0116 02:57:08.245477       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I0116 02:57:08.303405       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0116 02:57:08.305703       1 server_others.go:152] "Using iptables Proxier"
	I0116 02:57:08.305736       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0116 02:57:08.305741       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0116 02:57:08.305792       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0116 02:57:08.306055       1 server.go:846] "Version info" version="v1.28.4"
	I0116 02:57:08.306075       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0116 02:57:08.306712       1 config.go:97] "Starting endpoint slice config controller"
	I0116 02:57:08.306796       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0116 02:57:08.306728       1 config.go:315] "Starting node config controller"
	I0116 02:57:08.307164       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0116 02:57:08.306755       1 config.go:188] "Starting service config controller"
	I0116 02:57:08.307203       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0116 02:57:08.407737       1 shared_informer.go:318] Caches are synced for service config
	I0116 02:57:08.407751       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0116 02:57:08.407757       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [e6eb1cb38e1a5aa73930c8922d869b4d8f633a8d6467791be490470101611f54] <==
	W0116 02:56:51.613426       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0116 02:56:51.613506       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0116 02:56:51.613505       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0116 02:56:51.613570       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0116 02:56:51.613590       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0116 02:56:51.613594       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0116 02:56:51.613344       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0116 02:56:51.613611       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0116 02:56:51.613625       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0116 02:56:51.613633       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0116 02:56:51.613516       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0116 02:56:51.613751       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0116 02:56:52.438477       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0116 02:56:52.438509       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0116 02:56:52.489018       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0116 02:56:52.489068       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0116 02:56:52.496493       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0116 02:56:52.496534       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0116 02:56:52.502985       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0116 02:56:52.503021       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0116 02:56:52.514688       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0116 02:56:52.514728       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0116 02:56:52.681330       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0116 02:56:52.681360       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0116 02:56:54.503258       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 16 02:57:07 multinode-061156 kubelet[1583]: E0116 02:57:07.045954    1583 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Jan 16 02:57:07 multinode-061156 kubelet[1583]: E0116 02:57:07.045996    1583 projected.go:198] Error preparing data for projected volume kube-api-access-cl4qt for pod kube-system/kube-proxy-xsg8g: configmap "kube-root-ca.crt" not found
	Jan 16 02:57:07 multinode-061156 kubelet[1583]: E0116 02:57:07.045956    1583 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Jan 16 02:57:07 multinode-061156 kubelet[1583]: E0116 02:57:07.046100    1583 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0e531a4d-783f-4c65-9580-2b8e43a88adb-kube-api-access-cl4qt podName:0e531a4d-783f-4c65-9580-2b8e43a88adb nodeName:}" failed. No retries permitted until 2024-01-16 02:57:07.546050379 +0000 UTC m=+13.379011865 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cl4qt" (UniqueName: "kubernetes.io/projected/0e531a4d-783f-4c65-9580-2b8e43a88adb-kube-api-access-cl4qt") pod "kube-proxy-xsg8g" (UID: "0e531a4d-783f-4c65-9580-2b8e43a88adb") : configmap "kube-root-ca.crt" not found
	Jan 16 02:57:07 multinode-061156 kubelet[1583]: E0116 02:57:07.046106    1583 projected.go:198] Error preparing data for projected volume kube-api-access-stlvb for pod kube-system/kindnet-86pdd: configmap "kube-root-ca.crt" not found
	Jan 16 02:57:07 multinode-061156 kubelet[1583]: E0116 02:57:07.046174    1583 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/73b1a04d-5339-4226-9d2b-5b574436acee-kube-api-access-stlvb podName:73b1a04d-5339-4226-9d2b-5b574436acee nodeName:}" failed. No retries permitted until 2024-01-16 02:57:07.546153875 +0000 UTC m=+13.379115362 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-stlvb" (UniqueName: "kubernetes.io/projected/73b1a04d-5339-4226-9d2b-5b574436acee-kube-api-access-stlvb") pod "kindnet-86pdd" (UID: "73b1a04d-5339-4226-9d2b-5b574436acee") : configmap "kube-root-ca.crt" not found
	Jan 16 02:57:07 multinode-061156 kubelet[1583]: W0116 02:57:07.823039    1583 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/1df13cf78442615c8dfdcb1d98e000ff80fa092233fc799b27825520e581bf81/crio-7e3f31149506879ab0d6ee3377bde21e1d2fd505309ac4bb6068f718b48d4b1c WatchSource:0}: Error finding container 7e3f31149506879ab0d6ee3377bde21e1d2fd505309ac4bb6068f718b48d4b1c: Status 404 returned error can't find the container with id 7e3f31149506879ab0d6ee3377bde21e1d2fd505309ac4bb6068f718b48d4b1c
	Jan 16 02:57:07 multinode-061156 kubelet[1583]: W0116 02:57:07.902121    1583 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/1df13cf78442615c8dfdcb1d98e000ff80fa092233fc799b27825520e581bf81/crio-939decdb9175a49fbe842825bc57861f37e7fc73d363db3f2703ebf9181184bf WatchSource:0}: Error finding container 939decdb9175a49fbe842825bc57861f37e7fc73d363db3f2703ebf9181184bf: Status 404 returned error can't find the container with id 939decdb9175a49fbe842825bc57861f37e7fc73d363db3f2703ebf9181184bf
	Jan 16 02:57:08 multinode-061156 kubelet[1583]: I0116 02:57:08.360857    1583 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-86pdd" podStartSLOduration=2.360821754 podCreationTimestamp="2024-01-16 02:57:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-16 02:57:08.360500886 +0000 UTC m=+14.193462376" watchObservedRunningTime="2024-01-16 02:57:08.360821754 +0000 UTC m=+14.193783245"
	Jan 16 02:57:14 multinode-061156 kubelet[1583]: I0116 02:57:14.327185    1583 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-xsg8g" podStartSLOduration=8.327137834 podCreationTimestamp="2024-01-16 02:57:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-16 02:57:08.401299706 +0000 UTC m=+14.234261238" watchObservedRunningTime="2024-01-16 02:57:14.327137834 +0000 UTC m=+20.160099318"
	Jan 16 02:57:38 multinode-061156 kubelet[1583]: I0116 02:57:38.785639    1583 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Jan 16 02:57:38 multinode-061156 kubelet[1583]: I0116 02:57:38.806756    1583 topology_manager.go:215] "Topology Admit Handler" podUID="d6092a0e-384a-4e9a-92b1-f5a394a2eb25" podNamespace="kube-system" podName="coredns-5dd5756b68-4rrfv"
	Jan 16 02:57:38 multinode-061156 kubelet[1583]: I0116 02:57:38.807816    1583 topology_manager.go:215] "Topology Admit Handler" podUID="5ada5003-e754-4457-91d0-cee0ba6b3640" podNamespace="kube-system" podName="storage-provisioner"
	Jan 16 02:57:38 multinode-061156 kubelet[1583]: I0116 02:57:38.927789    1583 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzj8b\" (UniqueName: \"kubernetes.io/projected/d6092a0e-384a-4e9a-92b1-f5a394a2eb25-kube-api-access-wzj8b\") pod \"coredns-5dd5756b68-4rrfv\" (UID: \"d6092a0e-384a-4e9a-92b1-f5a394a2eb25\") " pod="kube-system/coredns-5dd5756b68-4rrfv"
	Jan 16 02:57:38 multinode-061156 kubelet[1583]: I0116 02:57:38.927838    1583 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d6092a0e-384a-4e9a-92b1-f5a394a2eb25-config-volume\") pod \"coredns-5dd5756b68-4rrfv\" (UID: \"d6092a0e-384a-4e9a-92b1-f5a394a2eb25\") " pod="kube-system/coredns-5dd5756b68-4rrfv"
	Jan 16 02:57:38 multinode-061156 kubelet[1583]: I0116 02:57:38.927860    1583 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2v2p\" (UniqueName: \"kubernetes.io/projected/5ada5003-e754-4457-91d0-cee0ba6b3640-kube-api-access-s2v2p\") pod \"storage-provisioner\" (UID: \"5ada5003-e754-4457-91d0-cee0ba6b3640\") " pod="kube-system/storage-provisioner"
	Jan 16 02:57:38 multinode-061156 kubelet[1583]: I0116 02:57:38.927882    1583 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5ada5003-e754-4457-91d0-cee0ba6b3640-tmp\") pod \"storage-provisioner\" (UID: \"5ada5003-e754-4457-91d0-cee0ba6b3640\") " pod="kube-system/storage-provisioner"
	Jan 16 02:57:39 multinode-061156 kubelet[1583]: W0116 02:57:39.130540    1583 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/1df13cf78442615c8dfdcb1d98e000ff80fa092233fc799b27825520e581bf81/crio-00140cabfd82a0eedc0ff7fb17b4b997a889d88f62978e29db39e84f2c510939 WatchSource:0}: Error finding container 00140cabfd82a0eedc0ff7fb17b4b997a889d88f62978e29db39e84f2c510939: Status 404 returned error can't find the container with id 00140cabfd82a0eedc0ff7fb17b4b997a889d88f62978e29db39e84f2c510939
	Jan 16 02:57:39 multinode-061156 kubelet[1583]: W0116 02:57:39.142238    1583 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/1df13cf78442615c8dfdcb1d98e000ff80fa092233fc799b27825520e581bf81/crio-9cc36200e1c796740971863ca88b6198d300ac595e648c1be64b4c17e4e5a722 WatchSource:0}: Error finding container 9cc36200e1c796740971863ca88b6198d300ac595e648c1be64b4c17e4e5a722: Status 404 returned error can't find the container with id 9cc36200e1c796740971863ca88b6198d300ac595e648c1be64b4c17e4e5a722
	Jan 16 02:57:39 multinode-061156 kubelet[1583]: I0116 02:57:39.412350    1583 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-4rrfv" podStartSLOduration=32.412303574 podCreationTimestamp="2024-01-16 02:57:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-16 02:57:39.412031444 +0000 UTC m=+45.244992934" watchObservedRunningTime="2024-01-16 02:57:39.412303574 +0000 UTC m=+45.245265064"
	Jan 16 02:57:39 multinode-061156 kubelet[1583]: I0116 02:57:39.434276    1583 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=31.434226886 podCreationTimestamp="2024-01-16 02:57:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-16 02:57:39.434197264 +0000 UTC m=+45.267158758" watchObservedRunningTime="2024-01-16 02:57:39.434226886 +0000 UTC m=+45.267188378"
	Jan 16 02:58:30 multinode-061156 kubelet[1583]: I0116 02:58:30.201032    1583 topology_manager.go:215] "Topology Admit Handler" podUID="b3c7bc25-7b93-41f2-927d-d52591df900c" podNamespace="default" podName="busybox-5bc68d56bd-4dmmg"
	Jan 16 02:58:30 multinode-061156 kubelet[1583]: I0116 02:58:30.212982    1583 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w9vw\" (UniqueName: \"kubernetes.io/projected/b3c7bc25-7b93-41f2-927d-d52591df900c-kube-api-access-5w9vw\") pod \"busybox-5bc68d56bd-4dmmg\" (UID: \"b3c7bc25-7b93-41f2-927d-d52591df900c\") " pod="default/busybox-5bc68d56bd-4dmmg"
	Jan 16 02:58:30 multinode-061156 kubelet[1583]: W0116 02:58:30.530929    1583 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/1df13cf78442615c8dfdcb1d98e000ff80fa092233fc799b27825520e581bf81/crio-7f829201bec1db76047da0bfe9b8d6507ca316cae582d2190da0ac9692b32d3b WatchSource:0}: Error finding container 7f829201bec1db76047da0bfe9b8d6507ca316cae582d2190da0ac9692b32d3b: Status 404 returned error can't find the container with id 7f829201bec1db76047da0bfe9b8d6507ca316cae582d2190da0ac9692b32d3b
	Jan 16 02:58:33 multinode-061156 kubelet[1583]: I0116 02:58:33.507027    1583 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-5bc68d56bd-4dmmg" podStartSLOduration=1.046437209 podCreationTimestamp="2024-01-16 02:58:30 +0000 UTC" firstStartedPulling="2024-01-16 02:58:30.533656947 +0000 UTC m=+96.366618429" lastFinishedPulling="2024-01-16 02:58:32.994208914 +0000 UTC m=+98.827170388" observedRunningTime="2024-01-16 02:58:33.506856598 +0000 UTC m=+99.339818085" watchObservedRunningTime="2024-01-16 02:58:33.506989168 +0000 UTC m=+99.339950658"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-061156 -n multinode-061156
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-061156 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.21s)
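Note on the log above: the kube-root-ca.crt mount errors are the usual first-boot race (the pods were scheduled before the controller manager published the configmap) and self-healed on retry, as the "Observed pod startup duration" lines confirm; they are not the cause of this failure. For local triage, a minimal reproduction of the check this test performs, assuming it resolves and pings host.minikube.internal from the deployed busybox pod as the test name suggests (pod name taken from the kubelet log above):

	kubectl --context multinode-061156 exec busybox-5bc68d56bd-4dmmg -- sh -c "nslookup host.minikube.internal && ping -c 1 host.minikube.internal"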

Test pass (290/320)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 25.73
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.08
9 TestDownloadOnly/v1.16.0/DeleteAll 0.22
10 TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.28.4/json-events 23.95
13 TestDownloadOnly/v1.28.4/preload-exists 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.08
18 TestDownloadOnly/v1.28.4/DeleteAll 0.21
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.15
21 TestDownloadOnly/v1.29.0-rc.2/json-events 21.36
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.08
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.22
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.14
29 TestDownloadOnlyKic 1.31
30 TestBinaryMirror 0.81
31 TestOffline 78.41
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 148.05
38 TestAddons/parallel/Registry 15.52
40 TestAddons/parallel/InspektorGadget 10.73
41 TestAddons/parallel/MetricsServer 5.75
42 TestAddons/parallel/HelmTiller 11.54
44 TestAddons/parallel/CSI 104.55
45 TestAddons/parallel/Headlamp 14.26
46 TestAddons/parallel/CloudSpanner 5.87
47 TestAddons/parallel/LocalPath 15.14
48 TestAddons/parallel/NvidiaDevicePlugin 6.48
49 TestAddons/parallel/Yakd 6
52 TestAddons/serial/GCPAuth/Namespaces 0.11
53 TestAddons/StoppedEnableDisable 12.13
54 TestCertOptions 29.13
55 TestCertExpiration 226.13
57 TestForceSystemdFlag 31.96
58 TestForceSystemdEnv 30.04
60 TestKVMDriverInstallOrUpdate 4.91
64 TestErrorSpam/setup 21.12
65 TestErrorSpam/start 0.64
66 TestErrorSpam/status 0.9
67 TestErrorSpam/pause 1.53
68 TestErrorSpam/unpause 1.54
69 TestErrorSpam/stop 1.38
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 69.48
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 27.2
76 TestFunctional/serial/KubeContext 0.05
77 TestFunctional/serial/KubectlGetPods 0.06
80 TestFunctional/serial/CacheCmd/cache/add_remote 2.57
81 TestFunctional/serial/CacheCmd/cache/add_local 1.96
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
83 TestFunctional/serial/CacheCmd/cache/list 0.06
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
85 TestFunctional/serial/CacheCmd/cache/cache_reload 1.56
86 TestFunctional/serial/CacheCmd/cache/delete 0.13
87 TestFunctional/serial/MinikubeKubectlCmd 0.13
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
89 TestFunctional/serial/ExtraConfig 33.55
90 TestFunctional/serial/ComponentHealth 0.07
91 TestFunctional/serial/LogsCmd 1.37
92 TestFunctional/serial/LogsFileCmd 1.38
93 TestFunctional/serial/InvalidService 3.95
95 TestFunctional/parallel/ConfigCmd 0.48
96 TestFunctional/parallel/DashboardCmd 14.5
97 TestFunctional/parallel/DryRun 0.37
98 TestFunctional/parallel/InternationalLanguage 0.18
99 TestFunctional/parallel/StatusCmd 0.92
103 TestFunctional/parallel/ServiceCmdConnect 14.84
104 TestFunctional/parallel/AddonsCmd 0.19
105 TestFunctional/parallel/PersistentVolumeClaim 44.02
107 TestFunctional/parallel/SSHCmd 0.64
108 TestFunctional/parallel/CpCmd 1.68
109 TestFunctional/parallel/MySQL 29.2
110 TestFunctional/parallel/FileSync 0.3
111 TestFunctional/parallel/CertSync 1.76
115 TestFunctional/parallel/NodeLabels 0.09
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.66
119 TestFunctional/parallel/License 0.63
120 TestFunctional/parallel/Version/short 0.07
121 TestFunctional/parallel/Version/components 0.57
123 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.48
124 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
126 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.23
127 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
128 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
129 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
130 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
131 TestFunctional/parallel/ImageCommands/ImageBuild 4.16
132 TestFunctional/parallel/ImageCommands/Setup 1.97
133 TestFunctional/parallel/UpdateContextCmd/no_changes 0.16
134 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.2
135 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.16
136 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.27
137 TestFunctional/parallel/ProfileCmd/profile_not_create 0.41
138 TestFunctional/parallel/ProfileCmd/profile_list 0.42
139 TestFunctional/parallel/ProfileCmd/profile_json_output 0.49
140 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.29
141 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 10.66
142 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
143 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
147 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
148 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.85
149 TestFunctional/parallel/ImageCommands/ImageRemove 0.49
150 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.32
151 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.9
152 TestFunctional/parallel/ServiceCmd/DeployApp 10.17
153 TestFunctional/parallel/MountCmd/any-port 8.8
154 TestFunctional/parallel/ServiceCmd/List 1.71
155 TestFunctional/parallel/MountCmd/specific-port 1.9
156 TestFunctional/parallel/ServiceCmd/JSONOutput 1.71
157 TestFunctional/parallel/MountCmd/VerifyCleanup 1.68
158 TestFunctional/parallel/ServiceCmd/HTTPS 0.54
159 TestFunctional/parallel/ServiceCmd/Format 0.55
160 TestFunctional/parallel/ServiceCmd/URL 0.59
161 TestFunctional/delete_addon-resizer_images 0.36
162 TestFunctional/delete_my-image_image 0.02
163 TestFunctional/delete_minikube_cached_images 0.02
167 TestIngressAddonLegacy/StartLegacyK8sCluster 89.52
169 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 13.74
170 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.56
174 TestJSONOutput/start/Command 67.47
175 TestJSONOutput/start/Audit 0
177 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
180 TestJSONOutput/pause/Command 0.69
181 TestJSONOutput/pause/Audit 0
183 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/unpause/Command 0.59
187 TestJSONOutput/unpause/Audit 0
189 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/stop/Command 5.72
193 TestJSONOutput/stop/Audit 0
195 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
197 TestErrorJSONOutput 0.23
199 TestKicCustomNetwork/create_custom_network 36.86
200 TestKicCustomNetwork/use_default_bridge_network 24.43
201 TestKicExistingNetwork 26.32
202 TestKicCustomSubnet 24.4
203 TestKicStaticIP 26.7
204 TestMainNoArgs 0.06
205 TestMinikubeProfile 53.92
208 TestMountStart/serial/StartWithMountFirst 5.98
209 TestMountStart/serial/VerifyMountFirst 0.27
210 TestMountStart/serial/StartWithMountSecond 5.91
211 TestMountStart/serial/VerifyMountSecond 0.26
212 TestMountStart/serial/DeleteFirst 1.58
213 TestMountStart/serial/VerifyMountPostDelete 0.25
214 TestMountStart/serial/Stop 1.18
215 TestMountStart/serial/RestartStopped 7.72
216 TestMountStart/serial/VerifyMountPostStop 0.26
219 TestMultiNode/serial/FreshStart2Nodes 118.36
220 TestMultiNode/serial/DeployApp2Nodes 5.33
222 TestMultiNode/serial/AddNode 45.82
223 TestMultiNode/serial/MultiNodeLabels 0.06
224 TestMultiNode/serial/ProfileList 0.29
225 TestMultiNode/serial/CopyFile 9.5
226 TestMultiNode/serial/StopNode 2.14
227 TestMultiNode/serial/StartAfterStop 11.14
228 TestMultiNode/serial/RestartKeepsNodes 116.69
229 TestMultiNode/serial/DeleteNode 4.67
230 TestMultiNode/serial/StopMultiNode 23.66
231 TestMultiNode/serial/RestartMultiNode 74.94
232 TestMultiNode/serial/ValidateNameConflict 26.25
237 TestPreload 173.25
239 TestScheduledStopUnix 99.87
242 TestInsufficientStorage 12.88
243 TestRunningBinaryUpgrade 66.29
245 TestKubernetesUpgrade 357.42
246 TestMissingContainerUpgrade 169.86
247 TestStoppedBinaryUpgrade/Setup 2.47
256 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
257 TestNoKubernetes/serial/StartWithK8s 29.56
258 TestStoppedBinaryUpgrade/Upgrade 124.09
259 TestNoKubernetes/serial/StartWithStopK8s 12.73
260 TestNoKubernetes/serial/Start 5.64
261 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
262 TestNoKubernetes/serial/ProfileList 0.87
263 TestNoKubernetes/serial/Stop 1.19
264 TestNoKubernetes/serial/StartNoArgs 7.04
265 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.26
267 TestPause/serial/Start 45.61
268 TestPause/serial/SecondStartNoReconfiguration 30.31
269 TestStoppedBinaryUpgrade/MinikubeLogs 0.83
270 TestPause/serial/Pause 0.8
271 TestPause/serial/VerifyStatus 0.36
272 TestPause/serial/Unpause 0.93
273 TestPause/serial/PauseAgain 0.93
274 TestPause/serial/DeletePaused 4.89
278 TestPause/serial/VerifyDeletedResources 15.05
283 TestNetworkPlugins/group/false 3.96
288 TestStartStop/group/old-k8s-version/serial/FirstStart 111.96
290 TestStartStop/group/embed-certs/serial/FirstStart 69.05
291 TestStartStop/group/old-k8s-version/serial/DeployApp 9.38
292 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.78
293 TestStartStop/group/old-k8s-version/serial/Stop 11.83
294 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
295 TestStartStop/group/old-k8s-version/serial/SecondStart 411.05
296 TestStartStop/group/embed-certs/serial/DeployApp 9.23
297 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.95
298 TestStartStop/group/embed-certs/serial/Stop 11.96
299 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
300 TestStartStop/group/embed-certs/serial/SecondStart 337.03
302 TestStartStop/group/no-preload/serial/FirstStart 57.76
304 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 71.59
305 TestStartStop/group/no-preload/serial/DeployApp 9.23
306 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.88
307 TestStartStop/group/no-preload/serial/Stop 11.81
308 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
309 TestStartStop/group/no-preload/serial/SecondStart 594.58
310 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.26
311 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.93
312 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.85
313 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
314 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 338.2
315 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 12
316 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
317 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
318 TestStartStop/group/embed-certs/serial/Pause 2.78
320 TestStartStop/group/newest-cni/serial/FirstStart 34.02
321 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
322 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
323 TestStartStop/group/newest-cni/serial/DeployApp 0
324 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.76
325 TestStartStop/group/newest-cni/serial/Stop 1.21
326 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
327 TestStartStop/group/newest-cni/serial/SecondStart 26.25
328 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
329 TestStartStop/group/old-k8s-version/serial/Pause 2.73
330 TestNetworkPlugins/group/auto/Start 70.73
331 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
332 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
333 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
334 TestStartStop/group/newest-cni/serial/Pause 2.95
335 TestNetworkPlugins/group/kindnet/Start 70.89
336 TestNetworkPlugins/group/auto/KubeletFlags 0.28
337 TestNetworkPlugins/group/auto/NetCatPod 9.19
338 TestNetworkPlugins/group/auto/DNS 0.13
339 TestNetworkPlugins/group/auto/Localhost 0.11
340 TestNetworkPlugins/group/auto/HairPin 0.1
341 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 11.01
342 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
343 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
344 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
345 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.23
346 TestNetworkPlugins/group/calico/Start 70.78
347 TestNetworkPlugins/group/kindnet/KubeletFlags 0.3
348 TestNetworkPlugins/group/kindnet/NetCatPod 10.2
349 TestNetworkPlugins/group/custom-flannel/Start 62.35
350 TestNetworkPlugins/group/kindnet/DNS 0.16
351 TestNetworkPlugins/group/kindnet/Localhost 0.17
352 TestNetworkPlugins/group/kindnet/HairPin 0.15
353 TestNetworkPlugins/group/enable-default-cni/Start 71
354 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.28
355 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.18
356 TestNetworkPlugins/group/calico/ControllerPod 6.01
357 TestNetworkPlugins/group/calico/KubeletFlags 0.28
358 TestNetworkPlugins/group/calico/NetCatPod 10.18
359 TestNetworkPlugins/group/custom-flannel/DNS 0.14
360 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
361 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
362 TestNetworkPlugins/group/calico/DNS 0.13
363 TestNetworkPlugins/group/calico/Localhost 0.12
364 TestNetworkPlugins/group/calico/HairPin 0.12
365 TestNetworkPlugins/group/flannel/Start 61.86
366 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.4
367 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.15
368 TestNetworkPlugins/group/bridge/Start 38.8
369 TestNetworkPlugins/group/enable-default-cni/DNS 0.13
370 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
371 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
372 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
373 TestNetworkPlugins/group/bridge/NetCatPod 10.18
374 TestNetworkPlugins/group/bridge/DNS 0.13
375 TestNetworkPlugins/group/bridge/Localhost 0.1
376 TestNetworkPlugins/group/bridge/HairPin 0.1
377 TestNetworkPlugins/group/flannel/ControllerPod 6.01
378 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
379 TestNetworkPlugins/group/flannel/NetCatPod 8.18
380 TestNetworkPlugins/group/flannel/DNS 0.13
381 TestNetworkPlugins/group/flannel/Localhost 0.11
382 TestNetworkPlugins/group/flannel/HairPin 0.11
383 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
384 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
385 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
386 TestStartStop/group/no-preload/serial/Pause 2.66

TestDownloadOnly/v1.16.0/json-events (25.73s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-471353 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-471353 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (25.730818337s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (25.73s)
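The JSON event stream asserted here can also be inspected by hand. A sketch, assuming the line-delimited CloudEvents-style records minikube emits under -o=json (same flags as the run above; jq is not part of the test):

	out/minikube-linux-amd64 start -o=json --download-only -p download-only-471353 --force --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker | jq -r '.type'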

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)
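Roughly what this zero-duration assertion amounts to: the preload tarball fetched by the json-events run is already present in the cache, so nothing is downloaded again. A sketch of the same check, not the test's literal code (path as recorded in the LogsDuration log below):

	test -f /home/jenkins/minikube-integration/17965-443749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 && echo "preload cached"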

TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-471353
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-471353: exit status 85 (79.193318ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-471353 | jenkins | v1.32.0 | 16 Jan 24 02:35 UTC |          |
	|         | -p download-only-471353        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 02:35:50
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 02:35:50.738237  450585 out.go:296] Setting OutFile to fd 1 ...
	I0116 02:35:50.738357  450585 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:35:50.738366  450585 out.go:309] Setting ErrFile to fd 2...
	I0116 02:35:50.738371  450585 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:35:50.738574  450585 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-443749/.minikube/bin
	W0116 02:35:50.738692  450585 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17965-443749/.minikube/config/config.json: open /home/jenkins/minikube-integration/17965-443749/.minikube/config/config.json: no such file or directory
	I0116 02:35:50.739294  450585 out.go:303] Setting JSON to true
	I0116 02:35:50.740295  450585 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":8297,"bootTime":1705364254,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 02:35:50.740365  450585 start.go:138] virtualization: kvm guest
	I0116 02:35:50.742963  450585 out.go:97] [download-only-471353] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 02:35:50.744504  450585 out.go:169] MINIKUBE_LOCATION=17965
	W0116 02:35:50.743079  450585 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17965-443749/.minikube/cache/preloaded-tarball: no such file or directory
	I0116 02:35:50.743121  450585 notify.go:220] Checking for updates...
	I0116 02:35:50.747373  450585 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 02:35:50.748747  450585 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17965-443749/kubeconfig
	I0116 02:35:50.750241  450585 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-443749/.minikube
	I0116 02:35:50.751688  450585 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0116 02:35:50.754124  450585 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0116 02:35:50.754372  450585 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 02:35:50.775826  450585 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0116 02:35:50.775965  450585 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 02:35:50.827640  450585 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-01-16 02:35:50.819045517 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0116 02:35:50.827785  450585 docker.go:295] overlay module found
	I0116 02:35:50.829596  450585 out.go:97] Using the docker driver based on user configuration
	I0116 02:35:50.829626  450585 start.go:298] selected driver: docker
	I0116 02:35:50.829634  450585 start.go:902] validating driver "docker" against <nil>
	I0116 02:35:50.829736  450585 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 02:35:50.883637  450585 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-01-16 02:35:50.874914408 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0116 02:35:50.883940  450585 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0116 02:35:50.884536  450585 start_flags.go:392] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0116 02:35:50.884680  450585 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0116 02:35:50.886555  450585 out.go:169] Using Docker driver with root privileges
	I0116 02:35:50.888128  450585 cni.go:84] Creating CNI manager for ""
	I0116 02:35:50.888152  450585 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0116 02:35:50.888167  450585 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0116 02:35:50.888178  450585 start_flags.go:321] config:
	{Name:download-only-471353 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-471353 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 02:35:50.889808  450585 out.go:97] Starting control plane node download-only-471353 in cluster download-only-471353
	I0116 02:35:50.889833  450585 cache.go:121] Beginning downloading kic base image for docker with crio
	I0116 02:35:50.891043  450585 out.go:97] Pulling base image v0.0.42-1704759386-17866 ...
	I0116 02:35:50.891072  450585 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0116 02:35:50.891185  450585 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0116 02:35:50.906856  450585 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0116 02:35:50.907074  450585 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory
	I0116 02:35:50.907173  450585 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0116 02:35:51.308667  450585 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0116 02:35:51.308702  450585 cache.go:56] Caching tarball of preloaded images
	I0116 02:35:51.308895  450585 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0116 02:35:51.311101  450585 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0116 02:35:51.311137  450585 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0116 02:35:51.429846  450585 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/17965-443749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0116 02:36:03.858892  450585 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0116 02:36:03.858998  450585 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17965-443749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0116 02:36:03.972070  450585 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 as a tarball
	I0116 02:36:04.764349  450585 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I0116 02:36:04.764722  450585 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/download-only-471353/config.json ...
	I0116 02:36:04.764754  450585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/download-only-471353/config.json: {Name:mk5765758ab4d4394f12c3c5f3fa3243baf46166 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:36:04.764932  450585 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0116 02:36:04.765094  450585 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/17965-443749/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-471353"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.08s)
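The Last Start log above pins the preload's md5 in the download URL's checksum query parameter. The cached tarball can be re-verified by hand against that value; a sketch using the checksum copied from the log (this is not minikube's own verification path):

	cd /home/jenkins/minikube-integration/17965-443749/.minikube/cache/preloaded-tarball
	echo "432b600409d778ea7a21214e83948570  preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4" | md5sum -c -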

TestDownloadOnly/v1.16.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.16.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.16.0/DeleteAll (0.22s)

TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-471353
--- PASS: TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.28.4/json-events (23.95s)

=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-198405 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-198405 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio: (23.949045269s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (23.95s)

TestDownloadOnly/v1.28.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-198405
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-198405: exit status 85 (78.970667ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-471353 | jenkins | v1.32.0 | 16 Jan 24 02:35 UTC |                     |
	|         | -p download-only-471353        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 16 Jan 24 02:36 UTC | 16 Jan 24 02:36 UTC |
	| delete  | -p download-only-471353        | download-only-471353 | jenkins | v1.32.0 | 16 Jan 24 02:36 UTC | 16 Jan 24 02:36 UTC |
	| start   | -o=json --download-only        | download-only-198405 | jenkins | v1.32.0 | 16 Jan 24 02:36 UTC |                     |
	|         | -p download-only-198405        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 02:36:16
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 02:36:16.910386  450926 out.go:296] Setting OutFile to fd 1 ...
	I0116 02:36:16.910694  450926 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:36:16.910706  450926 out.go:309] Setting ErrFile to fd 2...
	I0116 02:36:16.910711  450926 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:36:16.910900  450926 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-443749/.minikube/bin
	I0116 02:36:16.911493  450926 out.go:303] Setting JSON to true
	I0116 02:36:16.912580  450926 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":8323,"bootTime":1705364254,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 02:36:16.912658  450926 start.go:138] virtualization: kvm guest
	I0116 02:36:16.915142  450926 out.go:97] [download-only-198405] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 02:36:16.916767  450926 out.go:169] MINIKUBE_LOCATION=17965
	I0116 02:36:16.915342  450926 notify.go:220] Checking for updates...
	I0116 02:36:16.920004  450926 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 02:36:16.921768  450926 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17965-443749/kubeconfig
	I0116 02:36:16.923189  450926 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-443749/.minikube
	I0116 02:36:16.924572  450926 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0116 02:36:16.927151  450926 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0116 02:36:16.927388  450926 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 02:36:16.947706  450926 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0116 02:36:16.947817  450926 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 02:36:16.998428  450926 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2024-01-16 02:36:16.989302837 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0116 02:36:16.998536  450926 docker.go:295] overlay module found
	I0116 02:36:17.000724  450926 out.go:97] Using the docker driver based on user configuration
	I0116 02:36:17.000767  450926 start.go:298] selected driver: docker
	I0116 02:36:17.000776  450926 start.go:902] validating driver "docker" against <nil>
	I0116 02:36:17.000872  450926 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 02:36:17.050091  450926 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2024-01-16 02:36:17.042401797 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0116 02:36:17.050249  450926 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0116 02:36:17.050782  450926 start_flags.go:392] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0116 02:36:17.050921  450926 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0116 02:36:17.053012  450926 out.go:169] Using Docker driver with root privileges
	I0116 02:36:17.054570  450926 cni.go:84] Creating CNI manager for ""
	I0116 02:36:17.054592  450926 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0116 02:36:17.054603  450926 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0116 02:36:17.054615  450926 start_flags.go:321] config:
	{Name:download-only-198405 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-198405 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 02:36:17.056227  450926 out.go:97] Starting control plane node download-only-198405 in cluster download-only-198405
	I0116 02:36:17.056251  450926 cache.go:121] Beginning downloading kic base image for docker with crio
	I0116 02:36:17.057747  450926 out.go:97] Pulling base image v0.0.42-1704759386-17866 ...
	I0116 02:36:17.057774  450926 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 02:36:17.057878  450926 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0116 02:36:17.073790  450926 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0116 02:36:17.073918  450926 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory
	I0116 02:36:17.073933  450926 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory, skipping pull
	I0116 02:36:17.073937  450926 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in cache, skipping pull
	I0116 02:36:17.073947  450926 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 as a tarball
	I0116 02:36:17.483629  450926 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0116 02:36:17.483674  450926 cache.go:56] Caching tarball of preloaded images
	I0116 02:36:17.483861  450926 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 02:36:17.485888  450926 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0116 02:36:17.485913  450926 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I0116 02:36:17.595421  450926 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b0bd7b3b222c094c365d9c9e10e48fc7 -> /home/jenkins/minikube-integration/17965-443749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0116 02:36:30.930346  450926 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I0116 02:36:30.930443  450926 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17965-443749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I0116 02:36:31.861853  450926 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0116 02:36:31.862272  450926 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/download-only-198405/config.json ...
	I0116 02:36:31.862318  450926 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/download-only-198405/config.json: {Name:mk947f9aefca9743af192498f483bd996c430d40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:36:31.862536  450926 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 02:36:31.862692  450926 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17965-443749/.minikube/cache/linux/amd64/v1.28.4/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-198405"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.08s)
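
The download.go:107 step above is a checksum-gated fetch: kubectl is downloaded together with its published SHA-256 and verified before being cached. A minimal by-hand sketch using the same dl.k8s.io URLs from the log (standard curl/sha256sum usage, not the test's own code):

# Fetch kubectl v1.28.4 and its checksum file, then verify before trusting the binary.
curl -LO "https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl"
curl -LO "https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256"
echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check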

TestDownloadOnly/v1.28.4/DeleteAll (0.21s)
=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.21s)

TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.15s)
=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-198405
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.29.0-rc.2/json-events (21.36s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-734827 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-734827 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (21.362293027s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (21.36s)
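
The -o=json flag in the command above makes minikube emit one CloudEvents-style JSON object per line on stdout (the --alsologtostderr logs go to stderr), which is what the json-events test consumes. A hedged reading sketch; the jq filter is illustrative, not part of the harness:

# Print the event type of each JSON line emitted during start.
out/minikube-linux-amd64 start -o=json --download-only -p download-only-734827 \
  --force --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=docker \
  | jq -r .type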

TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-734827
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-734827: exit status 85 (80.710267ms)
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-471353 | jenkins | v1.32.0 | 16 Jan 24 02:35 UTC |                     |
	|         | -p download-only-471353           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 16 Jan 24 02:36 UTC | 16 Jan 24 02:36 UTC |
	| delete  | -p download-only-471353           | download-only-471353 | jenkins | v1.32.0 | 16 Jan 24 02:36 UTC | 16 Jan 24 02:36 UTC |
	| start   | -o=json --download-only           | download-only-198405 | jenkins | v1.32.0 | 16 Jan 24 02:36 UTC |                     |
	|         | -p download-only-198405           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 16 Jan 24 02:36 UTC | 16 Jan 24 02:36 UTC |
	| delete  | -p download-only-198405           | download-only-198405 | jenkins | v1.32.0 | 16 Jan 24 02:36 UTC | 16 Jan 24 02:36 UTC |
	| start   | -o=json --download-only           | download-only-734827 | jenkins | v1.32.0 | 16 Jan 24 02:36 UTC |                     |
	|         | -p download-only-734827           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 02:36:41
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 02:36:41.297895  451256 out.go:296] Setting OutFile to fd 1 ...
	I0116 02:36:41.298062  451256 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:36:41.298076  451256 out.go:309] Setting ErrFile to fd 2...
	I0116 02:36:41.298085  451256 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:36:41.298270  451256 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-443749/.minikube/bin
	I0116 02:36:41.298823  451256 out.go:303] Setting JSON to true
	I0116 02:36:41.299805  451256 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":8347,"bootTime":1705364254,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 02:36:41.299874  451256 start.go:138] virtualization: kvm guest
	I0116 02:36:41.302218  451256 out.go:97] [download-only-734827] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 02:36:41.303810  451256 out.go:169] MINIKUBE_LOCATION=17965
	I0116 02:36:41.302385  451256 notify.go:220] Checking for updates...
	I0116 02:36:41.306615  451256 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 02:36:41.308357  451256 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17965-443749/kubeconfig
	I0116 02:36:41.309873  451256 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-443749/.minikube
	I0116 02:36:41.311301  451256 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0116 02:36:41.314217  451256 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0116 02:36:41.314508  451256 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 02:36:41.335025  451256 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0116 02:36:41.335132  451256 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 02:36:41.384154  451256 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2024-01-16 02:36:41.375815392 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<ni
l> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0116 02:36:41.384326  451256 docker.go:295] overlay module found
	I0116 02:36:41.386248  451256 out.go:97] Using the docker driver based on user configuration
	I0116 02:36:41.386282  451256 start.go:298] selected driver: docker
	I0116 02:36:41.386290  451256 start.go:902] validating driver "docker" against <nil>
	I0116 02:36:41.386389  451256 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 02:36:41.435234  451256 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2024-01-16 02:36:41.427438346 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<ni
l> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0116 02:36:41.435417  451256 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0116 02:36:41.435931  451256 start_flags.go:392] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0116 02:36:41.436116  451256 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0116 02:36:41.438142  451256 out.go:169] Using Docker driver with root privileges
	I0116 02:36:41.439582  451256 cni.go:84] Creating CNI manager for ""
	I0116 02:36:41.439602  451256 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0116 02:36:41.439613  451256 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0116 02:36:41.439628  451256 start_flags.go:321] config:
	{Name:download-only-734827 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-734827 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contai
nerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 02:36:41.441075  451256 out.go:97] Starting control plane node download-only-734827 in cluster download-only-734827
	I0116 02:36:41.441093  451256 cache.go:121] Beginning downloading kic base image for docker with crio
	I0116 02:36:41.442389  451256 out.go:97] Pulling base image v0.0.42-1704759386-17866 ...
	I0116 02:36:41.442413  451256 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0116 02:36:41.442512  451256 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0116 02:36:41.457470  451256 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0116 02:36:41.457616  451256 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory
	I0116 02:36:41.457645  451256 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory, skipping pull
	I0116 02:36:41.457655  451256 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in cache, skipping pull
	I0116 02:36:41.457668  451256 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 as a tarball
	I0116 02:36:41.866902  451256 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0116 02:36:41.866938  451256 cache.go:56] Caching tarball of preloaded images
	I0116 02:36:41.867144  451256 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0116 02:36:41.869141  451256 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0116 02:36:41.869161  451256 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0116 02:36:41.981342  451256 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:9e0f57288adacc30aad3ff7e72a8dc68 -> /home/jenkins/minikube-integration/17965-443749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0116 02:36:53.267316  451256 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0116 02:36:53.267429  451256 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17965-443749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0116 02:36:54.083185  451256 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I0116 02:36:54.083555  451256 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/download-only-734827/config.json ...
	I0116 02:36:54.083585  451256 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/download-only-734827/config.json: {Name:mk82b4e4304bbf26eaa4e4d7a23146b433274fb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:36:54.083779  451256 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0116 02:36:54.083940  451256 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17965-443749/.minikube/cache/linux/amd64/v1.29.0-rc.2/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-734827"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.22s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.22s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.14s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-734827
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnlyKic (1.31s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-254883 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-254883" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-254883
--- PASS: TestDownloadOnlyKic (1.31s)

TestBinaryMirror (0.81s)
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-207552 --alsologtostderr --binary-mirror http://127.0.0.1:41525 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-207552" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-207552
--- PASS: TestBinaryMirror (0.81s)
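
TestBinaryMirror points --binary-mirror at a short-lived HTTP server on 127.0.0.1:41525 that stands in for dl.k8s.io when fetching kubectl, kubelet, and kubeadm. A rough stand-in for such a mirror (the directory layout is an assumption, not the harness's actual server):

# Serve the k8s release binaries (plus .sha256 files) from a local directory tree.
mkdir -p mirror/v1.28.4/bin/linux/amd64
# ...copy kubectl/kubelet/kubeadm and their .sha256 files into that directory...
(cd mirror && python3 -m http.server 41525)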

TestOffline (78.41s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-486634 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-486634 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (1m14.740425948s)
helpers_test.go:175: Cleaning up "offline-crio-486634" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-486634
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-486634: (3.666155484s)
--- PASS: TestOffline (78.41s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-411655
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-411655: exit status 85 (70.169264ms)
-- stdout --
	* Profile "addons-411655" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-411655"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-411655
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-411655: exit status 85 (68.033612ms)
-- stdout --
	* Profile "addons-411655" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-411655"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (148.05s)
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-411655 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-411655 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m28.05161765s)
--- PASS: TestAddons/Setup (148.05s)

TestAddons/parallel/Registry (15.52s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 20.982537ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-96z6n" [a9935a40-1774-4d34-846a-3f21c4e26b94] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.005286748s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-kkkjz" [8796e73d-33b4-46b2-b5e3-48bec46545b4] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004637291s
addons_test.go:340: (dbg) Run:  kubectl --context addons-411655 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-411655 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-411655 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.691573261s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-411655 ip
2024/01/16 02:39:48 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-411655 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.52s)
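
The wget --spider probe above exercises the registry Service from inside the cluster, while the debug GET to 192.168.49.2:5000 hits the same addon from the host over the node IP reported by "minikube ip". The addon speaks the standard Docker Registry HTTP API, so the host-side check can be reproduced directly:

# List repositories via the Registry v2 API on this run's node IP.
curl http://192.168.49.2:5000/v2/_catalog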

TestAddons/parallel/InspektorGadget (10.73s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-9gfjx" [65f1a9a8-b4b8-4689-99f8-41073d000209] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004664787s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-411655
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-411655: (5.722275537s)
--- PASS: TestAddons/parallel/InspektorGadget (10.73s)

TestAddons/parallel/MetricsServer (5.75s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 2.919324ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-m8lg8" [1bf5020c-76ff-4dcd-bff4-03851042ddaa] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004420042s
addons_test.go:415: (dbg) Run:  kubectl --context addons-411655 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-411655 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.75s)

TestAddons/parallel/HelmTiller (11.54s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 19.299252ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-bjttm" [65952226-2c10-4145-9f5a-3a8193ea8a97] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.005319925s
addons_test.go:473: (dbg) Run:  kubectl --context addons-411655 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-411655 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.987781687s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-411655 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.54s)

TestAddons/parallel/CSI (104.55s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 21.533925ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-411655 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-411655 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [639a8e32-0469-41d7-8248-af76a6a5a60a] Pending
helpers_test.go:344: "task-pv-pod" [639a8e32-0469-41d7-8248-af76a6a5a60a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [639a8e32-0469-41d7-8248-af76a6a5a60a] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.00374274s
addons_test.go:584: (dbg) Run:  kubectl --context addons-411655 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-411655 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-411655 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-411655 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-411655 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-411655 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-411655 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [51c7e8b8-7bf8-49a0-81dc-a2842cdb1922] Pending
helpers_test.go:344: "task-pv-pod-restore" [51c7e8b8-7bf8-49a0-81dc-a2842cdb1922] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [51c7e8b8-7bf8-49a0-81dc-a2842cdb1922] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003662053s
addons_test.go:626: (dbg) Run:  kubectl --context addons-411655 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-411655 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-411655 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-411655 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-411655 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.567592404s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-411655 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (104.55s)
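
The CSI sequence above provisions PVC hpvc, snapshots it as new-snapshot-demo, and restores the snapshot into hpvc-restore before tearing everything down. For orientation, a hedged sketch of the snapshot step (an illustrative manifest, not the repo's testdata/csi-hostpath-driver/snapshot.yaml; the class name csi-hostpath-snapclass is assumed from typical csi-hostpath deployments):

kubectl --context addons-411655 apply -f - <<EOF
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-demo
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass
  source:
    persistentVolumeClaimName: hpvc
EOF
# Poll readiness the same way the helper above does.
kubectl --context addons-411655 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default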

TestAddons/parallel/Headlamp (14.26s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-411655 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-411655 --alsologtostderr -v=1: (1.251636623s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-9t5sd" [0efc85d8-e993-4ecf-9a17-a2bc2f9f0b7f] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-9t5sd" [0efc85d8-e993-4ecf-9a17-a2bc2f9f0b7f] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.004154184s
--- PASS: TestAddons/parallel/Headlamp (14.26s)

TestAddons/parallel/CloudSpanner (5.87s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-64c8c85f65-hsr6s" [0a0bd112-e893-499a-928c-3d240f00d195] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003726786s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-411655
--- PASS: TestAddons/parallel/CloudSpanner (5.87s)

TestAddons/parallel/LocalPath (15.14s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-411655 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-411655 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411655 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [919f5556-c10a-400b-a51c-b61cb4a1bbef] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [919f5556-c10a-400b-a51c-b61cb4a1bbef] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [919f5556-c10a-400b-a51c-b61cb4a1bbef] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.003782301s
addons_test.go:891: (dbg) Run:  kubectl --context addons-411655 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-411655 ssh "cat /opt/local-path-provisioner/pvc-b9b3dea2-21d2-4d07-abee-78e9d4e666b6_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-411655 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-411655 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-411655 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (15.14s)
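
Note: the repeated helpers_test.go:394 invocations above are a poll on the PVC phase. A rough shell equivalent, assuming the same profile, and noting that the pvc-... directory read at addons_test.go:900 is specific to this run:

	# Poll until the local-path provisioner binds the claim
	until [ "$(kubectl --context addons-411655 get pvc test-pvc -n default \
	    -o jsonpath='{.status.phase}')" = "Bound" ]; do sleep 2; done
	# Read back the file the test pod wrote into the provisioned volume
	out/minikube-linux-amd64 -p addons-411655 ssh \
	  "cat /opt/local-path-provisioner/pvc-b9b3dea2-21d2-4d07-abee-78e9d4e666b6_default_test-pvc/file1"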

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.48s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-tr95k" [14e5b74a-4026-4787-94c2-6fa1eeb1e161] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004263017s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-411655
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.48s)

                                                
                                    
TestAddons/parallel/Yakd (6s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-8b6mj" [c330c611-e576-409d-9261-222558d1b3c8] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003907166s
--- PASS: TestAddons/parallel/Yakd (6.00s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-411655 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-411655 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.13s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-411655
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-411655: (11.840893068s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-411655
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-411655
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-411655
--- PASS: TestAddons/StoppedEnableDisable (12.13s)
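
Note: the three commands this test drives can be replayed by hand against the same profile; enabling and disabling an addon must succeed even while the cluster is stopped:

	out/minikube-linux-amd64 stop -p addons-411655
	out/minikube-linux-amd64 addons enable dashboard -p addons-411655
	out/minikube-linux-amd64 addons disable dashboard -p addons-411655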

                                                
                                    
TestCertOptions (29.13s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-669726 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-669726 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (26.424995316s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-669726 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-669726 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-669726 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-669726" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-669726
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-669726: (2.078054826s)
--- PASS: TestCertOptions (29.13s)
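
Note: the openssl call above dumps the whole apiserver certificate. A sketch of a narrower check for the custom SANs and port, assuming the same profile (the grep filter is an illustrative addition, not part of the test):

	# The extra --apiserver-ips/--apiserver-names passed at start time should appear as SANs
	out/minikube-linux-amd64 -p cert-options-669726 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 "Subject Alternative Name"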

                                                
                                    
TestCertExpiration (226.13s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-232426 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-232426 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (26.57788614s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-232426 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-232426 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (17.248283202s)
helpers_test.go:175: Cleaning up "cert-expiration-232426" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-232426
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-232426: (2.305149767s)
--- PASS: TestCertExpiration (226.13s)
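
Note: the ~226s runtime is mostly the test waiting out the 3m expiration between the two starts. To reproduce by hand (same flags as logged), let the short-lived certs lapse before the second start:

	# First start: certs valid for only 3 minutes
	out/minikube-linux-amd64 start -p cert-expiration-232426 --memory=2048 \
	  --cert-expiration=3m --driver=docker --container-runtime=crio
	sleep 180   # let the certificates expire
	# Second start must regenerate certs with the new one-year expiration
	out/minikube-linux-amd64 start -p cert-expiration-232426 --memory=2048 \
	  --cert-expiration=8760h --driver=docker --container-runtime=crio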

                                                
                                    
TestForceSystemdFlag (31.96s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-893906 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-893906 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (29.238698477s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-893906 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-893906" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-893906
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-893906: (2.424679947s)
--- PASS: TestForceSystemdFlag (31.96s)
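
Note: the test asserts on the CRI-O drop-in read above. A sketch of the manual check, assuming --force-systemd writes the cgroup manager into 02-crio.conf (the expected value is an assumption, not quoted from this log):

	out/minikube-linux-amd64 -p force-systemd-flag-893906 ssh \
	  "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager
	# expected, by assumption: cgroup_manager = "systemd"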

                                                
                                    
TestForceSystemdEnv (30.04s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-022165 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-022165 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (25.832557316s)
helpers_test.go:175: Cleaning up "force-systemd-env-022165" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-022165
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-022165: (4.206774222s)
--- PASS: TestForceSystemdEnv (30.04s)

                                                
                                    
TestKVMDriverInstallOrUpdate (4.91s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.91s)

                                                
                                    
TestErrorSpam/setup (21.12s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-583047 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-583047 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-583047 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-583047 --driver=docker  --container-runtime=crio: (21.120545155s)
--- PASS: TestErrorSpam/setup (21.12s)

                                                
                                    
TestErrorSpam/start (0.64s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-583047 --log_dir /tmp/nospam-583047 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-583047 --log_dir /tmp/nospam-583047 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-583047 --log_dir /tmp/nospam-583047 start --dry-run
--- PASS: TestErrorSpam/start (0.64s)

                                                
                                    
TestErrorSpam/status (0.9s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-583047 --log_dir /tmp/nospam-583047 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-583047 --log_dir /tmp/nospam-583047 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-583047 --log_dir /tmp/nospam-583047 status
--- PASS: TestErrorSpam/status (0.90s)

                                                
                                    
TestErrorSpam/pause (1.53s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-583047 --log_dir /tmp/nospam-583047 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-583047 --log_dir /tmp/nospam-583047 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-583047 --log_dir /tmp/nospam-583047 pause
--- PASS: TestErrorSpam/pause (1.53s)

                                                
                                    
TestErrorSpam/unpause (1.54s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-583047 --log_dir /tmp/nospam-583047 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-583047 --log_dir /tmp/nospam-583047 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-583047 --log_dir /tmp/nospam-583047 unpause
--- PASS: TestErrorSpam/unpause (1.54s)

                                                
                                    
TestErrorSpam/stop (1.38s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-583047 --log_dir /tmp/nospam-583047 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-583047 --log_dir /tmp/nospam-583047 stop: (1.175457055s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-583047 --log_dir /tmp/nospam-583047 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-583047 --log_dir /tmp/nospam-583047 stop
--- PASS: TestErrorSpam/stop (1.38s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: /home/jenkins/minikube-integration/17965-443749/.minikube/files/etc/test/nested/copy/450573/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (69.48s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-linux-amd64 start -p functional-380867 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0116 02:44:33.715367  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/addons-411655/client.crt: no such file or directory
E0116 02:44:33.721154  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/addons-411655/client.crt: no such file or directory
E0116 02:44:33.731448  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/addons-411655/client.crt: no such file or directory
E0116 02:44:33.751732  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/addons-411655/client.crt: no such file or directory
E0116 02:44:33.792061  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/addons-411655/client.crt: no such file or directory
E0116 02:44:33.872339  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/addons-411655/client.crt: no such file or directory
E0116 02:44:34.032714  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/addons-411655/client.crt: no such file or directory
E0116 02:44:34.353259  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/addons-411655/client.crt: no such file or directory
E0116 02:44:34.994196  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/addons-411655/client.crt: no such file or directory
E0116 02:44:36.274529  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/addons-411655/client.crt: no such file or directory
functional_test.go:2233: (dbg) Done: out/minikube-linux-amd64 start -p functional-380867 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m9.482325999s)
--- PASS: TestFunctional/serial/StartWithProxy (69.48s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (27.2s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-380867 --alsologtostderr -v=8
E0116 02:44:38.835440  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/addons-411655/client.crt: no such file or directory
E0116 02:44:43.956101  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/addons-411655/client.crt: no such file or directory
E0116 02:44:54.196520  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/addons-411655/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-380867 --alsologtostderr -v=8: (27.194336318s)
functional_test.go:659: soft start took 27.195194847s for "functional-380867" cluster.
--- PASS: TestFunctional/serial/SoftStart (27.20s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-380867 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.57s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.57s)
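
Note: cache add pulls an image on the host and preloads it into the node's runtime. The same sequence by hand, plus a listing step for verification:

	out/minikube-linux-amd64 -p functional-380867 cache add registry.k8s.io/pause:3.1
	out/minikube-linux-amd64 -p functional-380867 cache add registry.k8s.io/pause:3.3
	out/minikube-linux-amd64 -p functional-380867 cache add registry.k8s.io/pause:latest
	out/minikube-linux-amd64 cache list   # should show all three pause tags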

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.96s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-380867 /tmp/TestFunctionalserialCacheCmdcacheadd_local1128937230/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 cache add minikube-local-cache-test:functional-380867
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-380867 cache add minikube-local-cache-test:functional-380867: (1.622535214s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 cache delete minikube-local-cache-test:functional-380867
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-380867
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.96s)
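
Note: the same flow works for images that exist only locally. A sketch assuming a Dockerfile in the current directory (the test builds from a generated temp dir):

	docker build -t minikube-local-cache-test:functional-380867 .
	out/minikube-linux-amd64 -p functional-380867 cache add minikube-local-cache-test:functional-380867
	out/minikube-linux-amd64 -p functional-380867 cache delete minikube-local-cache-test:functional-380867
	docker rmi minikube-local-cache-test:functional-380867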

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.56s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-380867 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (284.728321ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.56s)
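
Note: the non-zero exit above is the expected half of the test: the image must be absent after crictl rmi, and cache reload must bring it back. The full sequence:

	out/minikube-linux-amd64 -p functional-380867 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-amd64 -p functional-380867 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image gone
	out/minikube-linux-amd64 -p functional-380867 cache reload
	out/minikube-linux-amd64 -p functional-380867 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again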

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 kubectl -- --context functional-380867 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-380867 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctional/serial/ExtraConfig (33.55s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-380867 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0116 02:45:14.677488  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/addons-411655/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-380867 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.545564217s)
functional_test.go:757: restart took 33.545764137s for "functional-380867" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (33.55s)
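
Note: --extra-config passes a component flag through to kubeadm; here it lands on the apiserver command line. A sketch of the restart plus a follow-up check (the grep step is illustrative, not part of the test):

	out/minikube-linux-amd64 start -p functional-380867 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
	kubectl --context functional-380867 -n kube-system get pod \
	  -l component=kube-apiserver -o yaml | grep enable-admission-plugins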

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-380867 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.37s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-380867 logs: (1.3712649s)
--- PASS: TestFunctional/serial/LogsCmd (1.37s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.38s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 logs --file /tmp/TestFunctionalserialLogsFileCmd2997598793/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-380867 logs --file /tmp/TestFunctionalserialLogsFileCmd2997598793/001/logs.txt: (1.381985817s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.38s)

                                                
                                    
TestFunctional/serial/InvalidService (3.95s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2320: (dbg) Run:  kubectl --context functional-380867 apply -f testdata/invalidsvc.yaml
functional_test.go:2334: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-380867
functional_test.go:2334: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-380867: exit status 115 (346.836622ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32056 |
	|-----------|-------------|-------------|---------------------------|
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2326: (dbg) Run:  kubectl --context functional-380867 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.95s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.48s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-380867 config get cpus: exit status 14 (102.283327ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-380867 config get cpus: exit status 14 (65.394391ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.48s)
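
Note: per the stderr above, config get on a missing key exits 14 with "specified key could not be found in config", so both failures are expected. The round trip by hand:

	out/minikube-linux-amd64 -p functional-380867 config set cpus 2
	out/minikube-linux-amd64 -p functional-380867 config get cpus     # prints 2
	out/minikube-linux-amd64 -p functional-380867 config unset cpus
	out/minikube-linux-amd64 -p functional-380867 config get cpus     # exit 14: key not in config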

                                                
                                    
TestFunctional/parallel/DashboardCmd (14.5s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-380867 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-380867 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 487611: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.50s)

                                                
                                    
TestFunctional/parallel/DryRun (0.37s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-380867 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-380867 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (166.447563ms)
-- stdout --
	* [functional-380867] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17965
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17965-443749/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-443749/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
-- /stdout --
** stderr ** 
	I0116 02:46:24.022678  487206 out.go:296] Setting OutFile to fd 1 ...
	I0116 02:46:24.022940  487206 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:46:24.022950  487206 out.go:309] Setting ErrFile to fd 2...
	I0116 02:46:24.022955  487206 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:46:24.023138  487206 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-443749/.minikube/bin
	I0116 02:46:24.023696  487206 out.go:303] Setting JSON to false
	I0116 02:46:24.024855  487206 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":8930,"bootTime":1705364254,"procs":239,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 02:46:24.024933  487206 start.go:138] virtualization: kvm guest
	I0116 02:46:24.028195  487206 out.go:177] * [functional-380867] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 02:46:24.029839  487206 out.go:177]   - MINIKUBE_LOCATION=17965
	I0116 02:46:24.029886  487206 notify.go:220] Checking for updates...
	I0116 02:46:24.031965  487206 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 02:46:24.033471  487206 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17965-443749/kubeconfig
	I0116 02:46:24.034804  487206 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-443749/.minikube
	I0116 02:46:24.036238  487206 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0116 02:46:24.037608  487206 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 02:46:24.039514  487206 config.go:182] Loaded profile config "functional-380867": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 02:46:24.040002  487206 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 02:46:24.061801  487206 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0116 02:46:24.061913  487206 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 02:46:24.117055  487206 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:47 SystemTime:2024-01-16 02:46:24.108595307 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0116 02:46:24.117171  487206 docker.go:295] overlay module found
	I0116 02:46:24.119162  487206 out.go:177] * Using the docker driver based on existing profile
	I0116 02:46:24.120608  487206 start.go:298] selected driver: docker
	I0116 02:46:24.120626  487206 start.go:902] validating driver "docker" against &{Name:functional-380867 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-380867 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 02:46:24.120703  487206 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 02:46:24.122851  487206 out.go:177] 
	W0116 02:46:24.124350  487206 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0116 02:46:24.125603  487206 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-380867 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.37s)
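
Note: --dry-run validates flags against the existing profile without mutating it; the 250MB request trips the memory floor and exits 23 (RSRC_INSUFFICIENT_REQ_MEMORY), exactly as logged. To reproduce:

	out/minikube-linux-amd64 start -p functional-380867 --dry-run --memory 250MB \
	  --alsologtostderr --driver=docker --container-runtime=crio
	echo "exit: $?"   # expect 23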

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.18s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-380867 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-380867 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (176.136838ms)
-- stdout --
	* [functional-380867] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17965
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17965-443749/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-443749/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
-- /stdout --
** stderr ** 
	I0116 02:46:19.813869  486143 out.go:296] Setting OutFile to fd 1 ...
	I0116 02:46:19.814002  486143 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:46:19.814011  486143 out.go:309] Setting ErrFile to fd 2...
	I0116 02:46:19.814015  486143 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:46:19.814349  486143 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-443749/.minikube/bin
	I0116 02:46:19.814987  486143 out.go:303] Setting JSON to false
	I0116 02:46:19.816078  486143 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":8926,"bootTime":1705364254,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 02:46:19.816151  486143 start.go:138] virtualization: kvm guest
	I0116 02:46:19.818789  486143 out.go:177] * [functional-380867] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I0116 02:46:19.820526  486143 out.go:177]   - MINIKUBE_LOCATION=17965
	I0116 02:46:19.820598  486143 notify.go:220] Checking for updates...
	I0116 02:46:19.822191  486143 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 02:46:19.823643  486143 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17965-443749/kubeconfig
	I0116 02:46:19.825128  486143 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-443749/.minikube
	I0116 02:46:19.826808  486143 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0116 02:46:19.828303  486143 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 02:46:19.830467  486143 config.go:182] Loaded profile config "functional-380867": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 02:46:19.831232  486143 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 02:46:19.855753  486143 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0116 02:46:19.855888  486143 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 02:46:19.914166  486143 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:47 SystemTime:2024-01-16 02:46:19.904481417 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0116 02:46:19.914262  486143 docker.go:295] overlay module found
	I0116 02:46:19.916712  486143 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0116 02:46:19.918301  486143 start.go:298] selected driver: docker
	I0116 02:46:19.918318  486143 start.go:902] validating driver "docker" against &{Name:functional-380867 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-380867 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 02:46:19.918453  486143 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 02:46:19.920957  486143 out.go:177] 
	W0116 02:46:19.922351  486143 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0116 02:46:19.923752  486143 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.92s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.92s)
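
Note: status supports plain, Go-template, and JSON output; the template fields exercised above are .Host, .Kubelet, .APIServer and .Kubeconfig. By hand:

	out/minikube-linux-amd64 -p functional-380867 status
	out/minikube-linux-amd64 -p functional-380867 status \
	  -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
	out/minikube-linux-amd64 -p functional-380867 status -o json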

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (14.84s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-380867 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-380867 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-b8dvp" [d1b8504a-25ff-477a-a89b-450395630145] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-b8dvp" [d1b8504a-25ff-477a-a89b-450395630145] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 14.004691951s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:30098
functional_test.go:1674: http://192.168.49.2:30098: success! body:

Hostname: hello-node-connect-55497b8b78-b8dvp

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30098
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (14.84s)
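Note: the check above can be replayed by hand against the same NodePort URL; a minimal sketch using only commands that appear in this log:

kubectl --context functional-380867 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
kubectl --context functional-380867 expose deployment hello-node-connect --type=NodePort --port=8080
URL=$(out/minikube-linux-amd64 -p functional-380867 service hello-node-connect --url)
curl -s "$URL"   # echoserver reflects the request back, as in the body above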

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.19s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (44.02s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [d078e8e2-ac36-42de-95cf-acf08344a2ee] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.020164223s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-380867 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-380867 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-380867 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-380867 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-380867 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [a50df207-b33a-41b8-882a-dfa667059a10] Pending
helpers_test.go:344: "sp-pod" [a50df207-b33a-41b8-882a-dfa667059a10] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [a50df207-b33a-41b8-882a-dfa667059a10] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 21.004381334s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-380867 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-380867 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-380867 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [b843b5fc-eef6-4824-84ba-74808b3f73e6] Pending
helpers_test.go:344: "sp-pod" [b843b5fc-eef6-4824-84ba-74808b3f73e6] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [b843b5fc-eef6-4824-84ba-74808b3f73e6] Running
2024/01/16 02:46:38 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.004244055s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-380867 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (44.02s)
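Note: the storage-provisioner testdata manifests are not reproduced in this report; a minimal sketch of an equivalent claim-plus-pod pair (names and the /tmp/mount path are taken from the log; the image and storage size are assumptions):

kubectl --context functional-380867 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi          # assumed size, not from the testdata
---
apiVersion: v1
kind: Pod
metadata:
  name: sp-pod
  labels:
    test: storage-provisioner
spec:
  containers:
  - name: myfrontend
    image: nginx              # assumed image, not from the testdata
    volumeMounts:
    - mountPath: /tmp/mount   # path exercised by the touch/ls steps above
      name: mypd
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim
EOF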

                                                
                                    
TestFunctional/parallel/SSHCmd (0.64s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.64s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.68s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 ssh -n functional-380867 "sudo cat /home/docker/cp-test.txt"
E0116 02:45:55.638535  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/addons-411655/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 cp functional-380867:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd355247807/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 ssh -n functional-380867 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 ssh -n functional-380867 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.68s)

                                                
                                    
TestFunctional/parallel/MySQL (29.2s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: (dbg) Run:  kubectl --context functional-380867 replace --force -f testdata/mysql.yaml
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-fw92m" [9705ddb5-3d23-4e97-8848-62de5b830c9f] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-fw92m" [9705ddb5-3d23-4e97-8848-62de5b830c9f] Running
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.004409286s
functional_test.go:1806: (dbg) Run:  kubectl --context functional-380867 exec mysql-859648c796-fw92m -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-380867 exec mysql-859648c796-fw92m -- mysql -ppassword -e "show databases;": exit status 1 (125.521125ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-380867 exec mysql-859648c796-fw92m -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-380867 exec mysql-859648c796-fw92m -- mysql -ppassword -e "show databases;": exit status 1 (121.2325ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-380867 exec mysql-859648c796-fw92m -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-380867 exec mysql-859648c796-fw92m -- mysql -ppassword -e "show databases;": exit status 1 (311.064896ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-380867 exec mysql-859648c796-fw92m -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-380867 exec mysql-859648c796-fw92m -- mysql -ppassword -e "show databases;": exit status 1 (163.726105ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-380867 exec mysql-859648c796-fw92m -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (29.20s)
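Note: the repeated non-zero exits above are expected while mysqld initializes inside the pod; the test simply retries until the server accepts connections. The same pattern by hand, as a sketch:

until kubectl --context functional-380867 exec mysql-859648c796-fw92m -- \
    mysql -ppassword -e "show databases;" >/dev/null 2>&1; do
  sleep 2   # keep retrying through the startup window seen above
done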

                                                
                                    
TestFunctional/parallel/FileSync (0.3s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1928: Checking for existence of /etc/test/nested/copy/450573/hosts within VM
functional_test.go:1930: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 ssh "sudo cat /etc/test/nested/copy/450573/hosts"
functional_test.go:1935: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.30s)

                                                
                                    
TestFunctional/parallel/CertSync (1.76s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1971: Checking for existence of /etc/ssl/certs/450573.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 ssh "sudo cat /etc/ssl/certs/450573.pem"
functional_test.go:1971: Checking for existence of /usr/share/ca-certificates/450573.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 ssh "sudo cat /usr/share/ca-certificates/450573.pem"
functional_test.go:1971: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/4505732.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 ssh "sudo cat /etc/ssl/certs/4505732.pem"
functional_test.go:1998: Checking for existence of /usr/share/ca-certificates/4505732.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 ssh "sudo cat /usr/share/ca-certificates/4505732.pem"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.76s)
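Note: the hashed .0 names above follow the OpenSSL c_rehash convention, where each CA cert is also linked as <subject_hash>.0. The hash can be derived like this (cert path taken from the log; the pairing is inferred from the order of checks above):

openssl x509 -noout -subject_hash -in /etc/ssl/certs/450573.pem
# expected to print 51391683, matching the /etc/ssl/certs/51391683.0 check above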

                                                
                                    
TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-380867 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)
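Note: the go-template above iterates the label map of the first node; an equivalent quick check, assuming the same context:

kubectl --context functional-380867 get nodes --show-labels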

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.66s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 ssh "sudo systemctl is-active docker"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-380867 ssh "sudo systemctl is-active docker": exit status 1 (348.547543ms)

-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 ssh "sudo systemctl is-active containerd"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-380867 ssh "sudo systemctl is-active containerd": exit status 1 (306.915222ms)

-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.66s)
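Note: the exit status 3 comes from systemctl itself; is-active exits 0 only when the unit is active, so with crio as the configured runtime both checks above fail as intended. A minimal reproduction:

out/minikube-linux-amd64 -p functional-380867 ssh "sudo systemctl is-active docker"
echo $?   # 3 here, matching the stderr above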

                                                
                                    
TestFunctional/parallel/License (0.63s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.63s)

                                                
                                    
TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2255: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
TestFunctional/parallel/Version/components (0.57s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2269: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.57s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.48s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-380867 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-380867 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-380867 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-380867 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 481989: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.48s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-380867 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.23s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-380867 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [19fe3210-2513-482e-bcbf-360be2c1d173] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [19fe3210-2513-482e-bcbf-360be2c1d173] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.004297835s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.23s)
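Note: testdata/testsvc.yaml is not reproduced in this report; a minimal sketch of an equivalent pod plus LoadBalancer service that a running "minikube tunnel" can assign an external IP to (the run=nginx-svc label comes from the wait selector above; all other field values are assumptions):

kubectl --context functional-380867 apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nginx-svc
  labels:
    run: nginx-svc
spec:
  containers:
  - name: nginx
    image: nginx            # assumed image
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: LoadBalancer        # tunnel supplies the external IP
  selector:
    run: nginx-svc
  ports:
  - port: 80
EOF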

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-380867 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-380867
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-380867 image ls --format short --alsologtostderr:
I0116 02:46:33.410203  489961 out.go:296] Setting OutFile to fd 1 ...
I0116 02:46:33.410551  489961 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 02:46:33.410566  489961 out.go:309] Setting ErrFile to fd 2...
I0116 02:46:33.410574  489961 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 02:46:33.410888  489961 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-443749/.minikube/bin
I0116 02:46:33.411837  489961 config.go:182] Loaded profile config "functional-380867": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0116 02:46:33.412008  489961 config.go:182] Loaded profile config "functional-380867": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0116 02:46:33.412548  489961 cli_runner.go:164] Run: docker container inspect functional-380867 --format={{.State.Status}}
I0116 02:46:33.434016  489961 ssh_runner.go:195] Run: systemctl --version
I0116 02:46:33.434127  489961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-380867
I0116 02:46:33.459458  489961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33217 SSHKeyPath:/home/jenkins/minikube-integration/17965-443749/.minikube/machines/functional-380867/id_rsa Username:docker}
I0116 02:46:33.557570  489961 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-380867 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                 | alpine             | 529b5644c430c | 44.4MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 7fe0e6f37db33 | 127MB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| docker.io/library/nginx                 | latest             | a8758716bb6aa | 191MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/kube-controller-manager | v1.28.4            | d058aa5ab969c | 123MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-proxy              | v1.28.4            | 83f6cc407eed8 | 74.7MB |
| registry.k8s.io/kube-scheduler          | v1.28.4            | e3db313c6dbc0 | 61.6MB |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
| gcr.io/google-containers/addon-resizer  | functional-380867  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-380867 image ls --format table --alsologtostderr:
I0116 02:46:34.090824  490240 out.go:296] Setting OutFile to fd 1 ...
I0116 02:46:34.090962  490240 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 02:46:34.090972  490240 out.go:309] Setting ErrFile to fd 2...
I0116 02:46:34.090977  490240 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 02:46:34.091218  490240 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-443749/.minikube/bin
I0116 02:46:34.091841  490240 config.go:182] Loaded profile config "functional-380867": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0116 02:46:34.091959  490240 config.go:182] Loaded profile config "functional-380867": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0116 02:46:34.092397  490240 cli_runner.go:164] Run: docker container inspect functional-380867 --format={{.State.Status}}
I0116 02:46:34.111837  490240 ssh_runner.go:195] Run: systemctl --version
I0116 02:46:34.111900  490240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-380867
I0116 02:46:34.128761  490240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33217 SSHKeyPath:/home/jenkins/minikube-integration/17965-443749/.minikube/machines/functional-380867/id_rsa Username:docker}
I0116 02:46:34.220428  490240 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-380867 image ls --format json --alsologtostderr:
[{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["regis
try.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c","registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"123261750"},{"id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a
315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"65258016"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e0
9b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"a8758716bb6aa4d90071160d27028fe4eaee7ce8166221a97d30440c8eac2be6","repoDigests":["docker.io/library/nginx@sha256:161ef4b1bf7effb350a2a9625cb2b59f69d54ec6059a8a155a1438d0439c593c","docker.io/library/nginx@sha256:4c0fdaa8b6341bfdeca5f18f7837462c80cff90527ee35ef185571e1c327beac"],"repoTags":["docker.io/library/nginx:latest"],"size":"190867606"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-380867"],"size":"34114467"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120e
a7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba","registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"61551410"},{"id":"529b5644c430c06553d2e8082c6713fe19a4169c9dc2369cbb960081f52924
ff","repoDigests":["docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686","docker.io/library/nginx@sha256:a59278fd22a9d411121e190b8cec8aa57b306aa3332459197777583beb728f59"],"repoTags":["docker.io/library/nginx:alpine"],"size":"44405005"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"127226832"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":["registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"74749335"},{"id":"350
b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-380867 image ls --format json --alsologtostderr:
I0116 02:46:33.826766  490143 out.go:296] Setting OutFile to fd 1 ...
I0116 02:46:33.827047  490143 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 02:46:33.827057  490143 out.go:309] Setting ErrFile to fd 2...
I0116 02:46:33.827064  490143 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 02:46:33.827353  490143 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-443749/.minikube/bin
I0116 02:46:33.828211  490143 config.go:182] Loaded profile config "functional-380867": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0116 02:46:33.828390  490143 config.go:182] Loaded profile config "functional-380867": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0116 02:46:33.829035  490143 cli_runner.go:164] Run: docker container inspect functional-380867 --format={{.State.Status}}
I0116 02:46:33.847838  490143 ssh_runner.go:195] Run: systemctl --version
I0116 02:46:33.847909  490143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-380867
I0116 02:46:33.867519  490143 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33217 SSHKeyPath:/home/jenkins/minikube-integration/17965-443749/.minikube/machines/functional-380867/id_rsa Username:docker}
I0116 02:46:33.960949  490143 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)
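Note: the JSON above is the raw per-image listing; individual fields can be extracted with jq, for example:

out/minikube-linux-amd64 -p functional-380867 image ls --format json | jq -r '.[].repoTags[]'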

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-380867 image ls --format yaml --alsologtostderr:
- id: 529b5644c430c06553d2e8082c6713fe19a4169c9dc2369cbb960081f52924ff
repoDigests:
- docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686
- docker.io/library/nginx@sha256:a59278fd22a9d411121e190b8cec8aa57b306aa3332459197777583beb728f59
repoTags:
- docker.io/library/nginx:alpine
size: "44405005"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "127226832"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "123261750"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"
- id: a8758716bb6aa4d90071160d27028fe4eaee7ce8166221a97d30440c8eac2be6
repoDigests:
- docker.io/library/nginx@sha256:161ef4b1bf7effb350a2a9625cb2b59f69d54ec6059a8a155a1438d0439c593c
- docker.io/library/nginx@sha256:4c0fdaa8b6341bfdeca5f18f7837462c80cff90527ee35ef185571e1c327beac
repoTags:
- docker.io/library/nginx:latest
size: "190867606"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-380867
size: "34114467"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "74749335"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "61551410"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-380867 image ls --format yaml --alsologtostderr:
I0116 02:46:33.536118  490038 out.go:296] Setting OutFile to fd 1 ...
I0116 02:46:33.536270  490038 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 02:46:33.536285  490038 out.go:309] Setting ErrFile to fd 2...
I0116 02:46:33.536292  490038 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 02:46:33.536632  490038 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-443749/.minikube/bin
I0116 02:46:33.537350  490038 config.go:182] Loaded profile config "functional-380867": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0116 02:46:33.537490  490038 config.go:182] Loaded profile config "functional-380867": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0116 02:46:33.537970  490038 cli_runner.go:164] Run: docker container inspect functional-380867 --format={{.State.Status}}
I0116 02:46:33.558756  490038 ssh_runner.go:195] Run: systemctl --version
I0116 02:46:33.558822  490038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-380867
I0116 02:46:33.580199  490038 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33217 SSHKeyPath:/home/jenkins/minikube-integration/17965-443749/.minikube/machines/functional-380867/id_rsa Username:docker}
I0116 02:46:33.701099  490038 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-380867 ssh pgrep buildkitd: exit status 1 (297.903679ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 image build -t localhost/my-image:functional-380867 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-380867 image build -t localhost/my-image:functional-380867 testdata/build --alsologtostderr: (3.632874032s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-380867 image build -t localhost/my-image:functional-380867 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> d1694319a85
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-380867
--> 0b9f19dd408
Successfully tagged localhost/my-image:functional-380867
0b9f19dd4085576e31486ed551787634c3ca4221fb64039ba5a535547071496c
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-380867 image build -t localhost/my-image:functional-380867 testdata/build --alsologtostderr:
I0116 02:46:33.991834  490202 out.go:296] Setting OutFile to fd 1 ...
I0116 02:46:33.992008  490202 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 02:46:33.992018  490202 out.go:309] Setting ErrFile to fd 2...
I0116 02:46:33.992023  490202 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 02:46:33.992216  490202 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-443749/.minikube/bin
I0116 02:46:33.992867  490202 config.go:182] Loaded profile config "functional-380867": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0116 02:46:33.993483  490202 config.go:182] Loaded profile config "functional-380867": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0116 02:46:33.993906  490202 cli_runner.go:164] Run: docker container inspect functional-380867 --format={{.State.Status}}
I0116 02:46:34.013793  490202 ssh_runner.go:195] Run: systemctl --version
I0116 02:46:34.013848  490202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-380867
I0116 02:46:34.033734  490202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33217 SSHKeyPath:/home/jenkins/minikube-integration/17965-443749/.minikube/machines/functional-380867/id_rsa Username:docker}
I0116 02:46:34.152446  490202 build_images.go:151] Building image from path: /tmp/build.1012046513.tar
I0116 02:46:34.152525  490202 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0116 02:46:34.161518  490202 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1012046513.tar
I0116 02:46:34.164828  490202 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1012046513.tar: stat -c "%s %y" /var/lib/minikube/build/build.1012046513.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1012046513.tar': No such file or directory
I0116 02:46:34.164859  490202 ssh_runner.go:362] scp /tmp/build.1012046513.tar --> /var/lib/minikube/build/build.1012046513.tar (3072 bytes)
I0116 02:46:34.208071  490202 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1012046513
I0116 02:46:34.216231  490202 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1012046513 -xf /var/lib/minikube/build/build.1012046513.tar
I0116 02:46:34.225140  490202 crio.go:297] Building image: /var/lib/minikube/build/build.1012046513
I0116 02:46:34.225211  490202 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-380867 /var/lib/minikube/build/build.1012046513 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0116 02:46:37.532396  490202 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-380867 /var/lib/minikube/build/build.1012046513 --cgroup-manager=cgroupfs: (3.307161535s)
I0116 02:46:37.532466  490202 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1012046513
I0116 02:46:37.541179  490202 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1012046513.tar
I0116 02:46:37.549154  490202 build_images.go:207] Built localhost/my-image:functional-380867 from /tmp/build.1012046513.tar
I0116 02:46:37.549188  490202 build_images.go:123] succeeded building to: functional-380867
I0116 02:46:37.549193  490202 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.16s)
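Note: the STEP lines above imply the contents of the testdata/build context; a sketch reconstructing it (the actual content.txt payload is not shown in this report):

mkdir -p build && cd build
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
echo placeholder > content.txt   # assumed; real testdata content not shown
out/minikube-linux-amd64 -p functional-380867 image build -t localhost/my-image:functional-380867 .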

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.97s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.951793675s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-380867
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.97s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.27s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 image load --daemon gcr.io/google-containers/addon-resizer:functional-380867 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-380867 image load --daemon gcr.io/google-containers/addon-resizer:functional-380867 --alsologtostderr: (4.040107423s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.27s)
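The load step copies the tagged image out of the host Docker daemon into the cluster's crio image store, and image ls confirms it landed. Stripped of test-only flags, the sequence is:

    out/minikube-linux-amd64 -p functional-380867 image load --daemon gcr.io/google-containers/addon-resizer:functional-380867
    out/minikube-linux-amd64 -p functional-380867 image ls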

TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

TestFunctional/parallel/ProfileCmd/profile_list (0.42s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "353.524059ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "68.704819ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.49s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "423.421472ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "69.513915ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.49s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.29s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 image load --daemon gcr.io/google-containers/addon-resizer:functional-380867 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-380867 image load --daemon gcr.io/google-containers/addon-resizer:functional-380867 --alsologtostderr: (2.786569202s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.29s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (10.66s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.938006288s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-380867
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 image load --daemon gcr.io/google-containers/addon-resizer:functional-380867 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-380867 image load --daemon gcr.io/google-containers/addon-resizer:functional-380867 --alsologtostderr: (8.455258527s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (10.66s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-380867 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.106.100.222 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-380867 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.85s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 image save gcr.io/google-containers/addon-resizer:functional-380867 /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.85s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 image rm gcr.io/google-containers/addon-resizer:functional-380867 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.32s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 image load /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-380867 image load /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.072274559s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.32s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.9s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-380867
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 image save --daemon gcr.io/google-containers/addon-resizer:functional-380867 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-380867 image save --daemon gcr.io/google-containers/addon-resizer:functional-380867 --alsologtostderr: (1.869403327s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-380867
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.90s)
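Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile, and ImageSaveDaemon exercise a full round trip between the cluster image store, a tarball, and the host Docker daemon. A condensed sketch of the same commands, with ./addon-resizer-save.tar standing in for the workspace path the tests use:

    out/minikube-linux-amd64 -p functional-380867 image save gcr.io/google-containers/addon-resizer:functional-380867 ./addon-resizer-save.tar
    out/minikube-linux-amd64 -p functional-380867 image rm gcr.io/google-containers/addon-resizer:functional-380867
    out/minikube-linux-amd64 -p functional-380867 image load ./addon-resizer-save.tar
    out/minikube-linux-amd64 -p functional-380867 image save --daemon gcr.io/google-containers/addon-resizer:functional-380867
    docker image inspect gcr.io/google-containers/addon-resizer:functional-380867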

TestFunctional/parallel/ServiceCmd/DeployApp (10.17s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-380867 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-380867 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-zqf6r" [10a9ab3c-ca32-4ecb-94e9-f4ab16359998] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-zqf6r" [10a9ab3c-ca32-4ecb-94e9-f4ab16359998] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.003792412s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.17s)

TestFunctional/parallel/MountCmd/any-port (8.8s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-380867 /tmp/TestFunctionalparallelMountCmdany-port2168861902/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1705373179933445131" to /tmp/TestFunctionalparallelMountCmdany-port2168861902/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1705373179933445131" to /tmp/TestFunctionalparallelMountCmdany-port2168861902/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1705373179933445131" to /tmp/TestFunctionalparallelMountCmdany-port2168861902/001/test-1705373179933445131
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-380867 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (306.79059ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 16 02:46 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 16 02:46 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 16 02:46 test-1705373179933445131
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 ssh cat /mount-9p/test-1705373179933445131
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-380867 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [09e5d64b-dc95-45c4-8af3-052bc1296e99] Pending
helpers_test.go:344: "busybox-mount" [09e5d64b-dc95-45c4-8af3-052bc1296e99] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [09e5d64b-dc95-45c4-8af3-052bc1296e99] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [09e5d64b-dc95-45c4-8af3-052bc1296e99] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.004497084s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-380867 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-380867 /tmp/TestFunctionalparallelMountCmdany-port2168861902/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.80s)
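The pattern above is the standard way to verify a 9p mount by hand; the first findmnt probe failing with exit status 1 and being retried is expected while the mount is still coming up. A minimal sketch, with /tmp/host-dir as a placeholder for the host directory and the mount backgrounded instead of run as a test daemon:

    out/minikube-linux-amd64 mount -p functional-380867 /tmp/host-dir:/mount-9p --alsologtostderr -v=1 &
    out/minikube-linux-amd64 -p functional-380867 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-380867 ssh -- ls -la /mount-9p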

TestFunctional/parallel/ServiceCmd/List (1.71s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 service list
functional_test.go:1458: (dbg) Done: out/minikube-linux-amd64 -p functional-380867 service list: (1.714607125s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.71s)

TestFunctional/parallel/MountCmd/specific-port (1.9s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-380867 /tmp/TestFunctionalparallelMountCmdspecific-port409529270/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-380867 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (286.945425ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-380867 /tmp/TestFunctionalparallelMountCmdspecific-port409529270/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-380867 ssh "sudo umount -f /mount-9p": exit status 1 (258.130057ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-380867 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-380867 /tmp/TestFunctionalparallelMountCmdspecific-port409529270/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.90s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.71s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 service list -o json
functional_test.go:1488: (dbg) Done: out/minikube-linux-amd64 -p functional-380867 service list -o json: (1.711030285s)
functional_test.go:1493: Took "1.711142597s" to run "out/minikube-linux-amd64 -p functional-380867 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.71s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.68s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-380867 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2573993356/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-380867 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2573993356/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-380867 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2573993356/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-380867 ssh "findmnt -T" /mount1: exit status 1 (338.778496ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-380867 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-380867 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2573993356/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-380867 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2573993356/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-380867 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2573993356/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.68s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.54s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:31276
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.54s)

TestFunctional/parallel/ServiceCmd/Format (0.55s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.55s)

TestFunctional/parallel/ServiceCmd/URL (0.59s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-380867 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:31276
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.59s)
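The three ServiceCmd lookups above differ only in output shape; all resolve the same NodePort service. Reproduced without test flags:

    out/minikube-linux-amd64 -p functional-380867 service hello-node --url
    out/minikube-linux-amd64 -p functional-380867 service --namespace=default --https --url hello-node
    out/minikube-linux-amd64 -p functional-380867 service hello-node --url --format={{.IP}}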

TestFunctional/delete_addon-resizer_images (0.36s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-380867
--- PASS: TestFunctional/delete_addon-resizer_images (0.36s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-380867
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-380867
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (89.52s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-570599 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0116 02:47:17.559657  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/addons-411655/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-570599 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m29.524842396s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (89.52s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (13.74s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-570599 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-570599 addons enable ingress --alsologtostderr -v=5: (13.737531364s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (13.74s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.56s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-570599 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.56s)

TestJSONOutput/start/Command (67.47s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-268402 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0116 02:51:34.269455  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/functional-380867/client.crt: no such file or directory
E0116 02:52:15.231070  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/functional-380867/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-268402 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m7.468414031s)
--- PASS: TestJSONOutput/start/Command (67.47s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.69s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-268402 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.69s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.59s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-268402 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.72s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-268402 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-268402 --output=json --user=testUser: (5.715947265s)
--- PASS: TestJSONOutput/stop/Command (5.72s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-839001 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-839001 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (79.893844ms)

-- stdout --
	{"specversion":"1.0","id":"0e91595c-400d-4c28-b07e-e96ef06d909b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-839001] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"915ee8ea-afe5-46ef-95de-9fcec30277f0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17965"}}
	{"specversion":"1.0","id":"3977b886-6f7d-4536-806b-45b75da8e427","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c38a35a2-a0b3-452a-adea-d230e9f2a621","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17965-443749/kubeconfig"}}
	{"specversion":"1.0","id":"b114168c-7eb8-44be-bb2d-09dd8bdae6d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-443749/.minikube"}}
	{"specversion":"1.0","id":"e7c56002-c86b-4041-86df-551373d0bb32","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"62cdb361-3972-4330-a2af-f234334d2472","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5154cc12-d43e-4886-b4cc-7d9d745a6a6a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-839001" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-839001
--- PASS: TestErrorJSONOutput (0.23s)
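As the captured stdout shows, --output=json emits one CloudEvents-style JSON object per line, with the type field discriminating io.k8s.sigs.minikube.step, .info, and .error events. One illustrative way to consume that stream (the jq filter here is an example, not part of the test):

    out/minikube-linux-amd64 start -p json-output-268402 --output=json --user=testUser \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'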

TestKicCustomNetwork/create_custom_network (36.86s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-750793 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-750793 --network=: (34.883502909s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-750793" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-750793
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-750793: (1.959087002s)
--- PASS: TestKicCustomNetwork/create_custom_network (36.86s)

TestKicCustomNetwork/use_default_bridge_network (24.43s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-525985 --network=bridge
E0116 02:53:30.715975  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599/client.crt: no such file or directory
E0116 02:53:30.721311  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599/client.crt: no such file or directory
E0116 02:53:30.731598  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599/client.crt: no such file or directory
E0116 02:53:30.751925  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599/client.crt: no such file or directory
E0116 02:53:30.792234  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599/client.crt: no such file or directory
E0116 02:53:30.872695  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599/client.crt: no such file or directory
E0116 02:53:31.033087  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599/client.crt: no such file or directory
E0116 02:53:31.353886  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599/client.crt: no such file or directory
E0116 02:53:31.995047  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599/client.crt: no such file or directory
E0116 02:53:33.275552  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599/client.crt: no such file or directory
E0116 02:53:35.837542  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599/client.crt: no such file or directory
E0116 02:53:37.152182  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/functional-380867/client.crt: no such file or directory
E0116 02:53:40.957836  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599/client.crt: no such file or directory
E0116 02:53:51.198434  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-525985 --network=bridge: (22.568937864s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-525985" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-525985
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-525985: (1.845453734s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (24.43s)

TestKicExistingNetwork (26.32s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-839243 --network=existing-network
E0116 02:54:11.678843  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-839243 --network=existing-network: (24.284479553s)
helpers_test.go:175: Cleaning up "existing-network-839243" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-839243
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-839243: (1.902299671s)
--- PASS: TestKicExistingNetwork (26.32s)

TestKicCustomSubnet (24.4s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-589457 --subnet=192.168.60.0/24
E0116 02:54:33.715225  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/addons-411655/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-589457 --subnet=192.168.60.0/24: (22.321174209s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-589457 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-589457" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-589457
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-589457: (2.060784052s)
--- PASS: TestKicCustomSubnet (24.40s)
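The subnet check is a one-liner against Docker's IPAM config, exactly as run by the test:

    out/minikube-linux-amd64 start -p custom-subnet-589457 --subnet=192.168.60.0/24
    docker network inspect custom-subnet-589457 --format "{{(index .IPAM.Config 0).Subnet}}"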

TestKicStaticIP (26.7s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-094063 --static-ip=192.168.200.200
E0116 02:54:52.639360  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-094063 --static-ip=192.168.200.200: (24.572604346s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-094063 ip
helpers_test.go:175: Cleaning up "static-ip-094063" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-094063
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-094063: (1.984866508s)
--- PASS: TestKicStaticIP (26.70s)

TestMainNoArgs (0.06s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (53.92s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-810407 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-810407 --driver=docker  --container-runtime=crio: (24.264526381s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-813195 --driver=docker  --container-runtime=crio
E0116 02:55:53.307491  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/functional-380867/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-813195 --driver=docker  --container-runtime=crio: (24.564359417s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-810407
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-813195
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-813195" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-813195
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-813195: (1.849007354s)
helpers_test.go:175: Cleaning up "first-810407" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-810407
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-810407: (2.202848971s)
--- PASS: TestMinikubeProfile (53.92s)
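Switching the active profile and inspecting the result is the core of this test; the same two commands work interactively against any existing profile:

    out/minikube-linux-amd64 profile first-810407
    out/minikube-linux-amd64 profile list -ojson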

TestMountStart/serial/StartWithMountFirst (5.98s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-711143 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-711143 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.98226922s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.98s)

TestMountStart/serial/VerifyMountFirst (0.27s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-711143 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (5.91s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-732065 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
E0116 02:56:14.560267  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-732065 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.911851528s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.91s)

TestMountStart/serial/VerifyMountSecond (0.26s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-732065 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.58s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-711143 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-711143 --alsologtostderr -v=5: (1.58373689s)
--- PASS: TestMountStart/serial/DeleteFirst (1.58s)

TestMountStart/serial/VerifyMountPostDelete (0.25s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-732065 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

TestMountStart/serial/Stop (1.18s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-732065
E0116 02:56:20.993249  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/functional-380867/client.crt: no such file or directory
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-732065: (1.177054303s)
--- PASS: TestMountStart/serial/Stop (1.18s)

TestMountStart/serial/RestartStopped (7.72s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-732065
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-732065: (6.718537085s)
--- PASS: TestMountStart/serial/RestartStopped (7.72s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-732065 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (118.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-061156 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p multinode-061156 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m57.906784029s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061156 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (118.36s)
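The two-node bring-up above reduces to one start invocation plus a status check; a sketch, assuming a new profile named multinode-demo:
    $ minikube start -p multinode-demo --nodes=2 --memory=2200 --wait=true \
        --driver=docker --container-runtime=crio
    $ minikube -p multinode-demo status --alsologtostderr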

TestMultiNode/serial/DeployApp2Nodes (5.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-061156 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-061156 -- rollout status deployment/busybox
E0116 02:58:30.716250  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599/client.crt: no such file or directory
multinode_test.go:514: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-061156 -- rollout status deployment/busybox: (3.811434808s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-061156 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-061156 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-061156 -- exec busybox-5bc68d56bd-4dmmg -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-061156 -- exec busybox-5bc68d56bd-hwz9l -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-061156 -- exec busybox-5bc68d56bd-4dmmg -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-061156 -- exec busybox-5bc68d56bd-hwz9l -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-061156 -- exec busybox-5bc68d56bd-4dmmg -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-061156 -- exec busybox-5bc68d56bd-hwz9l -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.33s)
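The DNS validation above follows a fixed pattern: apply a busybox Deployment, wait for the rollout, then run nslookup from every pod against progressively more qualified service names. A sketch; the app=busybox label selector is an assumption about the testdata manifest:
    $ kubectl apply -f testdata/multinodes/multinode-pod-dns-test.yaml
    $ kubectl rollout status deployment/busybox
    $ for pod in $(kubectl get pods -l app=busybox -o name); do kubectl exec "$pod" -- nslookup kubernetes.default.svc.cluster.local; done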

TestMultiNode/serial/AddNode (45.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-061156 -v 3 --alsologtostderr
E0116 02:58:58.401489  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599/client.crt: no such file or directory
multinode_test.go:111: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-061156 -v 3 --alsologtostderr: (45.205124908s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061156 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (45.82s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-061156 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.29s)

TestMultiNode/serial/CopyFile (9.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061156 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061156 cp testdata/cp-test.txt multinode-061156:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061156 ssh -n multinode-061156 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061156 cp multinode-061156:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2932865131/001/cp-test_multinode-061156.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061156 ssh -n multinode-061156 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061156 cp multinode-061156:/home/docker/cp-test.txt multinode-061156-m02:/home/docker/cp-test_multinode-061156_multinode-061156-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061156 ssh -n multinode-061156 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061156 ssh -n multinode-061156-m02 "sudo cat /home/docker/cp-test_multinode-061156_multinode-061156-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061156 cp multinode-061156:/home/docker/cp-test.txt multinode-061156-m03:/home/docker/cp-test_multinode-061156_multinode-061156-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061156 ssh -n multinode-061156 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061156 ssh -n multinode-061156-m03 "sudo cat /home/docker/cp-test_multinode-061156_multinode-061156-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061156 cp testdata/cp-test.txt multinode-061156-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061156 ssh -n multinode-061156-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061156 cp multinode-061156-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2932865131/001/cp-test_multinode-061156-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061156 ssh -n multinode-061156-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061156 cp multinode-061156-m02:/home/docker/cp-test.txt multinode-061156:/home/docker/cp-test_multinode-061156-m02_multinode-061156.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061156 ssh -n multinode-061156-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061156 ssh -n multinode-061156 "sudo cat /home/docker/cp-test_multinode-061156-m02_multinode-061156.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061156 cp multinode-061156-m02:/home/docker/cp-test.txt multinode-061156-m03:/home/docker/cp-test_multinode-061156-m02_multinode-061156-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061156 ssh -n multinode-061156-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061156 ssh -n multinode-061156-m03 "sudo cat /home/docker/cp-test_multinode-061156-m02_multinode-061156-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061156 cp testdata/cp-test.txt multinode-061156-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061156 ssh -n multinode-061156-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061156 cp multinode-061156-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2932865131/001/cp-test_multinode-061156-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061156 ssh -n multinode-061156-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061156 cp multinode-061156-m03:/home/docker/cp-test.txt multinode-061156:/home/docker/cp-test_multinode-061156-m03_multinode-061156.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061156 ssh -n multinode-061156-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061156 ssh -n multinode-061156 "sudo cat /home/docker/cp-test_multinode-061156-m03_multinode-061156.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061156 cp multinode-061156-m03:/home/docker/cp-test.txt multinode-061156-m02:/home/docker/cp-test_multinode-061156-m03_multinode-061156-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061156 ssh -n multinode-061156-m03 "sudo cat /home/docker/cp-test.txt"
E0116 02:59:33.715225  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/addons-411655/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061156 ssh -n multinode-061156-m02 "sudo cat /home/docker/cp-test_multinode-061156-m03_multinode-061156-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.50s)
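Every cp above is immediately verified with an ssh cat on the receiving node; the general shape for any node pair (node names illustrative):
    $ minikube -p multinode-demo cp testdata/cp-test.txt node-a:/home/docker/cp-test.txt
    $ minikube -p multinode-demo ssh -n node-a "sudo cat /home/docker/cp-test.txt"
    $ minikube -p multinode-demo cp node-a:/home/docker/cp-test.txt node-b:/home/docker/cp-test.txt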

TestMultiNode/serial/StopNode (2.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061156 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-amd64 -p multinode-061156 node stop m03: (1.18592967s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061156 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-061156 status: exit status 7 (479.974609ms)

                                                
                                                
-- stdout --
	multinode-061156
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-061156-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-061156-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061156 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-061156 status --alsologtostderr: exit status 7 (470.482228ms)

                                                
                                                
-- stdout --
	multinode-061156
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-061156-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-061156-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0116 02:59:35.907901  549425 out.go:296] Setting OutFile to fd 1 ...
	I0116 02:59:35.908052  549425 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:59:35.908064  549425 out.go:309] Setting ErrFile to fd 2...
	I0116 02:59:35.908072  549425 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:59:35.908360  549425 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-443749/.minikube/bin
	I0116 02:59:35.908589  549425 out.go:303] Setting JSON to false
	I0116 02:59:35.908631  549425 mustload.go:65] Loading cluster: multinode-061156
	I0116 02:59:35.908738  549425 notify.go:220] Checking for updates...
	I0116 02:59:35.909069  549425 config.go:182] Loaded profile config "multinode-061156": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 02:59:35.909087  549425 status.go:255] checking status of multinode-061156 ...
	I0116 02:59:35.909452  549425 cli_runner.go:164] Run: docker container inspect multinode-061156 --format={{.State.Status}}
	I0116 02:59:35.925977  549425 status.go:330] multinode-061156 host status = "Running" (err=<nil>)
	I0116 02:59:35.926031  549425 host.go:66] Checking if "multinode-061156" exists ...
	I0116 02:59:35.926304  549425 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-061156
	I0116 02:59:35.942740  549425 host.go:66] Checking if "multinode-061156" exists ...
	I0116 02:59:35.943009  549425 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0116 02:59:35.943064  549425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-061156
	I0116 02:59:35.959071  549425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33282 SSHKeyPath:/home/jenkins/minikube-integration/17965-443749/.minikube/machines/multinode-061156/id_rsa Username:docker}
	I0116 02:59:36.053597  549425 ssh_runner.go:195] Run: systemctl --version
	I0116 02:59:36.057505  549425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 02:59:36.067799  549425 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 02:59:36.117732  549425 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:56 SystemTime:2024-01-16 02:59:36.109792048 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0116 02:59:36.118278  549425 kubeconfig.go:92] found "multinode-061156" server: "https://192.168.58.2:8443"
	I0116 02:59:36.118305  549425 api_server.go:166] Checking apiserver status ...
	I0116 02:59:36.118340  549425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 02:59:36.128312  549425 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1405/cgroup
	I0116 02:59:36.136368  549425 api_server.go:182] apiserver freezer: "10:freezer:/docker/1df13cf78442615c8dfdcb1d98e000ff80fa092233fc799b27825520e581bf81/crio/crio-2abdd56261d18b6195c0fdcedc1c83aa1ea002cab03b254dc4bade2a1a9f815c"
	I0116 02:59:36.136432  549425 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/1df13cf78442615c8dfdcb1d98e000ff80fa092233fc799b27825520e581bf81/crio/crio-2abdd56261d18b6195c0fdcedc1c83aa1ea002cab03b254dc4bade2a1a9f815c/freezer.state
	I0116 02:59:36.143859  549425 api_server.go:204] freezer state: "THAWED"
	I0116 02:59:36.143910  549425 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0116 02:59:36.148246  549425 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0116 02:59:36.148295  549425 status.go:421] multinode-061156 apiserver status = Running (err=<nil>)
	I0116 02:59:36.148306  549425 status.go:257] multinode-061156 status: &{Name:multinode-061156 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0116 02:59:36.148322  549425 status.go:255] checking status of multinode-061156-m02 ...
	I0116 02:59:36.148554  549425 cli_runner.go:164] Run: docker container inspect multinode-061156-m02 --format={{.State.Status}}
	I0116 02:59:36.164842  549425 status.go:330] multinode-061156-m02 host status = "Running" (err=<nil>)
	I0116 02:59:36.164865  549425 host.go:66] Checking if "multinode-061156-m02" exists ...
	I0116 02:59:36.165133  549425 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-061156-m02
	I0116 02:59:36.180734  549425 host.go:66] Checking if "multinode-061156-m02" exists ...
	I0116 02:59:36.180989  549425 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0116 02:59:36.181030  549425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-061156-m02
	I0116 02:59:36.196148  549425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33287 SSHKeyPath:/home/jenkins/minikube-integration/17965-443749/.minikube/machines/multinode-061156-m02/id_rsa Username:docker}
	I0116 02:59:36.289262  549425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 02:59:36.300070  549425 status.go:257] multinode-061156-m02 status: &{Name:multinode-061156-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0116 02:59:36.300116  549425 status.go:255] checking status of multinode-061156-m03 ...
	I0116 02:59:36.300426  549425 cli_runner.go:164] Run: docker container inspect multinode-061156-m03 --format={{.State.Status}}
	I0116 02:59:36.316370  549425 status.go:330] multinode-061156-m03 host status = "Stopped" (err=<nil>)
	I0116 02:59:36.316393  549425 status.go:343] host is not running, skipping remaining checks
	I0116 02:59:36.316399  549425 status.go:257] multinode-061156-m03 status: &{Name:multinode-061156-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.14s)
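Note the exit-code contract checked here: with one node down, minikube status still prints the per-node table but exits 7, so scripts should treat that code as "degraded" rather than "command failed". A sketch:
    $ minikube -p multinode-demo node stop m03
    $ minikube -p multinode-demo status; echo "status exit=$?"    # 7 while m03 is stopped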

TestMultiNode/serial/StartAfterStop (11.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:272: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061156 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-061156 node start m03 --alsologtostderr: (10.459853097s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061156 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (11.14s)

TestMultiNode/serial/RestartKeepsNodes (116.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-061156
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-061156
multinode_test.go:318: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-061156: (24.645111703s)
multinode_test.go:323: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-061156 --wait=true -v=8 --alsologtostderr
E0116 03:00:53.306607  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/functional-380867/client.crt: no such file or directory
E0116 03:00:56.760967  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/addons-411655/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-amd64 start -p multinode-061156 --wait=true -v=8 --alsologtostderr: (1m31.917775145s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-061156
--- PASS: TestMultiNode/serial/RestartKeepsNodes (116.69s)
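The restart check above reduces to: record the node list, stop the whole profile, start again with --wait=true, and confirm the list is unchanged. A sketch:
    $ minikube node list -p multinode-demo > before.txt
    $ minikube stop -p multinode-demo
    $ minikube start -p multinode-demo --wait=true -v=8 --alsologtostderr
    $ minikube node list -p multinode-demo | diff before.txt -    # no output expected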

TestMultiNode/serial/DeleteNode (4.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061156 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p multinode-061156 node delete m03: (4.076728512s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061156 status --alsologtostderr
multinode_test.go:442: (dbg) Run:  docker volume ls
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.67s)

TestMultiNode/serial/StopMultiNode (23.66s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061156 stop
multinode_test.go:342: (dbg) Done: out/minikube-linux-amd64 -p multinode-061156 stop: (23.465650062s)
multinode_test.go:348: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061156 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-061156 status: exit status 7 (100.289627ms)

                                                
                                                
-- stdout --
	multinode-061156
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-061156-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061156 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-061156 status --alsologtostderr: exit status 7 (95.495223ms)

                                                
                                                
-- stdout --
	multinode-061156
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-061156-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0116 03:02:12.446772  559469 out.go:296] Setting OutFile to fd 1 ...
	I0116 03:02:12.447068  559469 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:02:12.447079  559469 out.go:309] Setting ErrFile to fd 2...
	I0116 03:02:12.447086  559469 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:02:12.447308  559469 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-443749/.minikube/bin
	I0116 03:02:12.447515  559469 out.go:303] Setting JSON to false
	I0116 03:02:12.447556  559469 mustload.go:65] Loading cluster: multinode-061156
	I0116 03:02:12.447676  559469 notify.go:220] Checking for updates...
	I0116 03:02:12.448096  559469 config.go:182] Loaded profile config "multinode-061156": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:02:12.448115  559469 status.go:255] checking status of multinode-061156 ...
	I0116 03:02:12.448652  559469 cli_runner.go:164] Run: docker container inspect multinode-061156 --format={{.State.Status}}
	I0116 03:02:12.465037  559469 status.go:330] multinode-061156 host status = "Stopped" (err=<nil>)
	I0116 03:02:12.465068  559469 status.go:343] host is not running, skipping remaining checks
	I0116 03:02:12.465077  559469 status.go:257] multinode-061156 status: &{Name:multinode-061156 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0116 03:02:12.465113  559469 status.go:255] checking status of multinode-061156-m02 ...
	I0116 03:02:12.465367  559469 cli_runner.go:164] Run: docker container inspect multinode-061156-m02 --format={{.State.Status}}
	I0116 03:02:12.482896  559469 status.go:330] multinode-061156-m02 host status = "Stopped" (err=<nil>)
	I0116 03:02:12.482951  559469 status.go:343] host is not running, skipping remaining checks
	I0116 03:02:12.482963  559469 status.go:257] multinode-061156-m02 status: &{Name:multinode-061156-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.66s)

TestMultiNode/serial/RestartMultiNode (74.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:372: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-061156 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:382: (dbg) Done: out/minikube-linux-amd64 start -p multinode-061156 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m14.325055557s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061156 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (74.94s)

TestMultiNode/serial/ValidateNameConflict (26.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-061156
multinode_test.go:480: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-061156-m02 --driver=docker  --container-runtime=crio
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-061156-m02 --driver=docker  --container-runtime=crio: exit status 14 (86.146414ms)

                                                
                                                
-- stdout --
	* [multinode-061156-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17965
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17965-443749/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-443749/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-061156-m02' is duplicated with machine name 'multinode-061156-m02' in profile 'multinode-061156'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-061156-m03 --driver=docker  --container-runtime=crio
E0116 03:03:30.716724  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599/client.crt: no such file or directory
multinode_test.go:488: (dbg) Done: out/minikube-linux-amd64 start -p multinode-061156-m03 --driver=docker  --container-runtime=crio: (23.939556763s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-061156
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-061156: exit status 80 (282.403773ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-061156
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-061156-m03 already exists in multinode-061156-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-061156-m03
multinode_test.go:500: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-061156-m03: (1.880409511s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (26.25s)
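Two naming rules are pinned down above: a new profile may not reuse a machine name that already belongs to another profile (MK_USAGE, exit 14), and node add refuses a node whose generated name collides with an existing standalone profile (GUEST_NODE_ADD, exit 80). For example, with profile multinode-demo running:
    $ minikube start -p multinode-demo-m02 --driver=docker --container-runtime=crio   # rejected, exit 14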

TestPreload (173.25s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-231840 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0116 03:04:33.715493  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/addons-411655/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-231840 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m24.28540913s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-231840 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-231840 image pull gcr.io/k8s-minikube/busybox: (2.899961731s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-231840
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-231840: (5.670098865s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-231840 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E0116 03:05:53.306654  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/functional-380867/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-231840 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m17.931289631s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-231840 image list
helpers_test.go:175: Cleaning up "test-preload-231840" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-231840
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-231840: (2.243496415s)
--- PASS: TestPreload (173.25s)
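The preload test is a four-step workflow: create a cluster with preloads disabled, pull an extra image, stop, then restart and confirm the image survived. A condensed replay, assuming a profile named preload-demo:
    $ minikube start -p preload-demo --memory=2200 --preload=false \
        --driver=docker --container-runtime=crio --kubernetes-version=v1.24.4
    $ minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
    $ minikube stop -p preload-demo
    $ minikube start -p preload-demo --wait=true --driver=docker --container-runtime=crio
    $ minikube -p preload-demo image list | grep busybox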

TestScheduledStopUnix (99.87s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-414205 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-414205 --memory=2048 --driver=docker  --container-runtime=crio: (23.810899202s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-414205 --schedule 5m
E0116 03:07:16.353989  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/functional-380867/client.crt: no such file or directory
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-414205 -n scheduled-stop-414205
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-414205 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-414205 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-414205 -n scheduled-stop-414205
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-414205
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-414205 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-414205
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-414205: exit status 7 (78.868847ms)

                                                
                                                
-- stdout --
	scheduled-stop-414205
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-414205 -n scheduled-stop-414205
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-414205 -n scheduled-stop-414205: exit status 7 (76.208858ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-414205" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-414205
E0116 03:08:30.716102  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-414205: (4.595158094s)
--- PASS: TestScheduledStopUnix (99.87s)
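Three flags drive the scheduled-stop flow above: --schedule arms a background stop timer, --cancel-scheduled disarms it, and the pending deadline is visible through the TimeToStop status field:
    $ minikube stop -p demo --schedule 5m
    $ minikube status -p demo --format='{{.TimeToStop}}'
    $ minikube stop -p demo --cancel-scheduled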

TestInsufficientStorage (12.88s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-036661 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-036661 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.521236148s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b2fb902b-7592-4f15-8299-dc5b842635f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-036661] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"66f89e8d-ffe6-4465-8f21-4567946cb2cb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17965"}}
	{"specversion":"1.0","id":"e07515d6-4347-4832-a80f-76f3a8b9d837","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"91f64491-5aca-48e3-b5f8-f6de1cc713ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17965-443749/kubeconfig"}}
	{"specversion":"1.0","id":"fd002248-1d7b-453e-9b4c-9fee5969c57f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-443749/.minikube"}}
	{"specversion":"1.0","id":"7e5f574c-e8bf-46d2-b874-84af0c141373","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"c24d6b0d-fead-487f-a7e5-fd3ad4a4172d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"50cb0119-d8d4-4325-b476-5cff853ed6a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"f97a4706-5de3-4fee-8035-67a0e15acdee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"b2e6d87b-bce5-4946-ae8c-75013251dbb4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"a8b2f2f4-0fcb-4a81-96ea-ae4fcb77873a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"6935955c-c4d1-45b8-9854-03288b36d425","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-036661 in cluster insufficient-storage-036661","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"b406b15f-e592-4c0b-898e-dd99ee4430e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1704759386-17866 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"63b8c47e-055d-4848-82f3-66498da1eb77","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"37bc836b-7948-4882-ad10-0ecade4249b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-036661 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-036661 --output=json --layout=cluster: exit status 7 (270.90138ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-036661","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-036661","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0116 03:08:43.165159  580626 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-036661" does not appear in /home/jenkins/minikube-integration/17965-443749/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-036661 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-036661 --output=json --layout=cluster: exit status 7 (271.686425ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-036661","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-036661","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0116 03:08:43.437894  580717 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-036661" does not appear in /home/jenkins/minikube-integration/17965-443749/kubeconfig
	E0116 03:08:43.447227  580717 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/insufficient-storage-036661/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-036661" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-036661
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-036661: (1.813412s)
--- PASS: TestInsufficientStorage (12.88s)
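With --output=json, minikube start emits one CloudEvents-style JSON object per line, so failures like the one above can be detected mechanically; a sketch using jq, keying on the io.k8s.sigs.minikube.error type seen in the output above:
    $ minikube start -p demo --output=json --driver=docker --container-runtime=crio \
        | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.exitcode'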

TestRunningBinaryUpgrade (66.29s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3444147794 start -p running-upgrade-495735 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3444147794 start -p running-upgrade-495735 --memory=2200 --vm-driver=docker  --container-runtime=crio: (28.413311111s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-495735 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-495735 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (29.746765726s)
helpers_test.go:175: Cleaning up "running-upgrade-495735" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-495735
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-495735: (5.681536427s)
--- PASS: TestRunningBinaryUpgrade (66.29s)
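A running-binary upgrade is simply the new binary re-running start against a profile the old binary created and left running (the versioned binary path below is illustrative; the test uses a temp copy):
    $ /tmp/minikube-v1.26.0 start -p upgrade-demo --memory=2200 --vm-driver=docker --container-runtime=crio
    $ out/minikube-linux-amd64 start -p upgrade-demo --memory=2200 --driver=docker --container-runtime=crio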

TestKubernetesUpgrade (357.42s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-000053 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0116 03:09:53.761670  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599/client.crt: no such file or directory
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-000053 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (58.531616088s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-000053
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-000053: (1.205066276s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-000053 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-000053 status --format={{.Host}}: exit status 7 (82.107574ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-000053 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-000053 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m30.352813739s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-000053 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-000053 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-000053 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (83.822875ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-000053] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17965
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17965-443749/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-443749/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-000053
	    minikube start -p kubernetes-upgrade-000053 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0000532 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-000053 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-000053 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-000053 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (24.049390827s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-000053" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-000053
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-000053: (3.056444287s)
--- PASS: TestKubernetesUpgrade (357.42s)
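The upgrade path is stop-then-start with a newer --kubernetes-version; going backwards is refused with K8S_DOWNGRADE_UNSUPPORTED (exit 106), and the supported recovery is the delete/recreate sequence the stderr block prints. Condensed:
    $ minikube start -p kup --memory=2200 --kubernetes-version=v1.16.0 --driver=docker --container-runtime=crio
    $ minikube stop -p kup
    $ minikube start -p kup --memory=2200 --kubernetes-version=v1.29.0-rc.2 --driver=docker --container-runtime=crio
    $ minikube start -p kup --kubernetes-version=v1.16.0 --driver=docker --container-runtime=crio   # fails, exit 106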

TestMissingContainerUpgrade (169.86s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3853773169 start -p missing-upgrade-803515 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3853773169 start -p missing-upgrade-803515 --memory=2200 --driver=docker  --container-runtime=crio: (1m37.972760447s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-803515
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-803515: (14.814387553s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-803515
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-803515 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-803515 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (52.663654474s)
helpers_test.go:175: Cleaning up "missing-upgrade-803515" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-803515
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-803515: (1.888582898s)
--- PASS: TestMissingContainerUpgrade (169.86s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.47s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.47s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-505008 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-505008 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (90.299307ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-505008] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17965
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17965-443749/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-443749/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
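
Note: exit status 14 corresponds to the MK_USAGE error shown above; --kubernetes-version and --no-kubernetes are mutually exclusive. A minimal sketch of the accepted sequence, following the hint in the stderr block:

    # clear any globally configured version, then start without Kubernetes
    minikube config unset kubernetes-version
    minikube start -p NoKubernetes-505008 --no-kubernetes --driver=docker --container-runtime=crio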

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (29.56s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-505008 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-505008 --driver=docker  --container-runtime=crio: (29.169330448s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-505008 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (29.56s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (124.09s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.4077365941 start -p stopped-upgrade-518684 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.4077365941 start -p stopped-upgrade-518684 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m38.080947439s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.4077365941 -p stopped-upgrade-518684 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.4077365941 -p stopped-upgrade-518684 stop: (2.450576998s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-518684 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-518684 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (23.56066191s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (124.09s)
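
Note: this test exercises the stopped-binary upgrade path: a cluster is created with the legacy v1.26.0 binary, stopped, then started again with the binary under test. Compressed, the flow above is:

    /tmp/minikube-v1.26.0.4077365941 start -p stopped-upgrade-518684 --memory=2200 --vm-driver=docker --container-runtime=crio
    /tmp/minikube-v1.26.0.4077365941 -p stopped-upgrade-518684 stop
    out/minikube-linux-amd64 start -p stopped-upgrade-518684 --memory=2200 --driver=docker --container-runtime=crio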

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (12.73s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-505008 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-505008 --no-kubernetes --driver=docker  --container-runtime=crio: (9.995074359s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-505008 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-505008 status -o json: exit status 2 (405.483151ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-505008","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-505008
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-505008: (2.325878108s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (12.73s)

                                                
                                    
TestNoKubernetes/serial/Start (5.64s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-505008 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-505008 --no-kubernetes --driver=docker  --container-runtime=crio: (5.63794603s)
--- PASS: TestNoKubernetes/serial/Start (5.64s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-505008 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-505008 "sudo systemctl is-active --quiet service kubelet": exit status 1 (258.984602ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)
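
Note: the non-zero exit is the success condition here; systemctl is-active exits 0 only for an active unit, and "Process exited with status 3" maps to systemd's "inactive" state. A minimal sketch of the same check outside the harness (assuming the profile is still up):

    # the wrapper exits non-zero and reports the remote status (3 = inactive) on stderr
    minikube ssh -p NoKubernetes-505008 "sudo systemctl is-active --quiet service kubelet"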

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.87s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
E0116 03:09:33.715222  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/addons-411655/client.crt: no such file or directory
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.87s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-505008
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-505008: (1.189846376s)
--- PASS: TestNoKubernetes/serial/Stop (1.19s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.04s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-505008 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-505008 --driver=docker  --container-runtime=crio: (7.042298235s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.04s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-505008 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-505008 "sudo systemctl is-active --quiet service kubelet": exit status 1 (260.390263ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

                                                
                                    
TestPause/serial/Start (45.61s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-399690 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-399690 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (45.605256585s)
--- PASS: TestPause/serial/Start (45.61s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (30.31s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-399690 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-399690 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (30.27463333s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (30.31s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.83s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-518684
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.83s)

                                                
                                    
TestPause/serial/Pause (0.8s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-399690 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.80s)

                                                
                                    
TestPause/serial/VerifyStatus (0.36s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-399690 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-399690 --output=json --layout=cluster: exit status 2 (354.783994ms)

                                                
                                                
-- stdout --
	{"Name":"pause-399690","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-399690","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.36s)
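
Note: exit status 2 is the expected code for a paused cluster; the JSON above encodes it as StatusCode 418 / "Paused" with the kubelet reported as Stopped. A small sketch for pulling those fields out (assumes jq is available; not part of the test):

    out/minikube-linux-amd64 status -p pause-399690 --output=json --layout=cluster \
      | jq -r '.StatusName, .Nodes[0].Components.kubelet.StatusName'   # Paused, Stopped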

                                                
                                    
TestPause/serial/Unpause (0.93s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-399690 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.93s)

                                                
                                    
TestPause/serial/PauseAgain (0.93s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-399690 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.93s)

                                                
                                    
TestPause/serial/DeletePaused (4.89s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-399690 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-399690 --alsologtostderr -v=5: (4.891066781s)
--- PASS: TestPause/serial/DeletePaused (4.89s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (15.05s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (14.897266105s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-399690
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-399690: exit status 1 (15.520499ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-399690: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (15.05s)
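
Note: the docker volume inspect failure is the assertion here; after delete, no volume named after the profile should remain. A minimal sketch of the same post-delete check:

    docker volume inspect pause-399690                              # exit 1: "no such volume"
    docker ps -a --filter name=pause-399690 --format '{{.Names}}'   # should print nothing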

                                                
                                    
TestNetworkPlugins/group/false (3.96s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-126430 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-126430 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (152.895521ms)

                                                
                                                
-- stdout --
	* [false-126430] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17965
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17965-443749/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-443749/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0116 03:11:31.081414  616416 out.go:296] Setting OutFile to fd 1 ...
	I0116 03:11:31.081571  616416 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:11:31.081586  616416 out.go:309] Setting ErrFile to fd 2...
	I0116 03:11:31.081593  616416 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:11:31.081756  616416 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-443749/.minikube/bin
	I0116 03:11:31.082329  616416 out.go:303] Setting JSON to false
	I0116 03:11:31.083472  616416 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":10437,"bootTime":1705364254,"procs":273,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 03:11:31.083538  616416 start.go:138] virtualization: kvm guest
	I0116 03:11:31.085818  616416 out.go:177] * [false-126430] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 03:11:31.087217  616416 out.go:177]   - MINIKUBE_LOCATION=17965
	I0116 03:11:31.087223  616416 notify.go:220] Checking for updates...
	I0116 03:11:31.088521  616416 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 03:11:31.089960  616416 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17965-443749/kubeconfig
	I0116 03:11:31.091216  616416 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-443749/.minikube
	I0116 03:11:31.092407  616416 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0116 03:11:31.093633  616416 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 03:11:31.095257  616416 config.go:182] Loaded profile config "kubernetes-upgrade-000053": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0116 03:11:31.095360  616416 config.go:182] Loaded profile config "missing-upgrade-803515": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0116 03:11:31.095449  616416 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 03:11:31.116619  616416 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0116 03:11:31.116734  616416 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 03:11:31.165956  616416 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:64 SystemTime:2024-01-16 03:11:31.157978755 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0116 03:11:31.166050  616416 docker.go:295] overlay module found
	I0116 03:11:31.167846  616416 out.go:177] * Using the docker driver based on user configuration
	I0116 03:11:31.169053  616416 start.go:298] selected driver: docker
	I0116 03:11:31.169074  616416 start.go:902] validating driver "docker" against <nil>
	I0116 03:11:31.169090  616416 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 03:11:31.171540  616416 out.go:177] 
	W0116 03:11:31.172769  616416 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0116 03:11:31.174047  616416 out.go:177] 

                                                
                                                
** /stderr **
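
Note: exit status 14 (MK_USAGE) is the point of this test: crio ships no built-in pod networking, so --cni=false is rejected. A hedged sketch of a start line that would be accepted instead (--cni=bridge is one of minikube's supported values):

    minikube start -p false-126430 --memory=2048 --cni=bridge --driver=docker --container-runtime=crio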
net_test.go:88: 
----------------------- debugLogs start: false-126430 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-126430

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-126430

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-126430

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-126430

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-126430

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-126430

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-126430

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-126430

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-126430

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-126430

>>> host: /etc/nsswitch.conf:
* Profile "false-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126430"

>>> host: /etc/hosts:
* Profile "false-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126430"

>>> host: /etc/resolv.conf:
* Profile "false-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126430"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-126430

>>> host: crictl pods:
* Profile "false-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126430"

>>> host: crictl containers:
* Profile "false-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126430"

>>> k8s: describe netcat deployment:
error: context "false-126430" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-126430" does not exist

>>> k8s: netcat logs:
error: context "false-126430" does not exist

>>> k8s: describe coredns deployment:
error: context "false-126430" does not exist

>>> k8s: describe coredns pods:
error: context "false-126430" does not exist

>>> k8s: coredns logs:
error: context "false-126430" does not exist

>>> k8s: describe api server pod(s):
error: context "false-126430" does not exist

>>> k8s: api server logs:
error: context "false-126430" does not exist

>>> host: /etc/cni:
* Profile "false-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126430"

>>> host: ip a s:
* Profile "false-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126430"

>>> host: ip r s:
* Profile "false-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126430"

>>> host: iptables-save:
* Profile "false-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126430"

>>> host: iptables table nat:
* Profile "false-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126430"

>>> k8s: describe kube-proxy daemon set:
error: context "false-126430" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-126430" does not exist

>>> k8s: kube-proxy logs:
error: context "false-126430" does not exist

>>> host: kubelet daemon status:
* Profile "false-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126430"

>>> host: kubelet daemon config:
* Profile "false-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126430"

>>> k8s: kubelet logs:
* Profile "false-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126430"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126430"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126430"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17965-443749/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 16 Jan 2024 03:10:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-000053
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17965-443749/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 16 Jan 2024 03:11:32 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: missing-upgrade-803515
contexts:
- context:
    cluster: kubernetes-upgrade-000053
    user: kubernetes-upgrade-000053
  name: kubernetes-upgrade-000053
- context:
    cluster: missing-upgrade-803515
    extensions:
    - extension:
        last-update: Tue, 16 Jan 2024 03:11:32 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: missing-upgrade-803515
  name: missing-upgrade-803515
current-context: missing-upgrade-803515
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-000053
  user:
    client-certificate: /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/kubernetes-upgrade-000053/client.crt
    client-key: /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/kubernetes-upgrade-000053/client.key
- name: missing-upgrade-803515
  user:
    client-certificate: /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/missing-upgrade-803515/client.crt
    client-key: /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/missing-upgrade-803515/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-126430

>>> host: docker daemon status:
* Profile "false-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126430"

>>> host: docker daemon config:
* Profile "false-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126430"

>>> host: /etc/docker/daemon.json:
* Profile "false-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126430"

>>> host: docker system info:
* Profile "false-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126430"

>>> host: cri-docker daemon status:
* Profile "false-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126430"

>>> host: cri-docker daemon config:
* Profile "false-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126430"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126430"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126430"

>>> host: cri-dockerd version:
* Profile "false-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126430"

>>> host: containerd daemon status:
* Profile "false-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126430"

>>> host: containerd daemon config:
* Profile "false-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126430"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126430"

>>> host: /etc/containerd/config.toml:
* Profile "false-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126430"

>>> host: containerd config dump:
* Profile "false-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126430"

>>> host: crio daemon status:
* Profile "false-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126430"

>>> host: crio daemon config:
* Profile "false-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126430"

>>> host: /etc/crio:
* Profile "false-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126430"

>>> host: crio config:
* Profile "false-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126430"

----------------------- debugLogs end: false-126430 [took: 3.381683958s] --------------------------------
helpers_test.go:175: Cleaning up "false-126430" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-126430
--- PASS: TestNetworkPlugins/group/false (3.96s)
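
Note: the kubectl config dump in the debug logs explains the uniform failures above: only the kubernetes-upgrade-000053 and missing-upgrade-803515 contexts exist, because the false-126430 profile was rejected before a cluster was ever created. A quick way to inspect the same state by hand:

    kubectl config get-contexts            # lists the two surviving contexts
    kubectl config use-context missing-upgrade-803515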

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (111.96s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-546785 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-546785 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (1m51.956706222s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (111.96s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (69.05s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-727141 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E0116 03:13:30.716011  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-727141 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (1m9.053470506s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (69.05s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-546785 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e6b4fded-b674-483e-83ec-a38b4259840d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e6b4fded-b674-483e-83ec-a38b4259840d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003043789s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-546785 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.38s)
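
Note: the harness polls pods matching the integration-test=busybox label until one reports Running. A roughly equivalent standalone check (kubectl wait is a stock substitute for the polling loop, not what the harness itself calls):

    kubectl --context old-k8s-version-546785 wait pod -l integration-test=busybox --for=condition=ready --timeout=8m0s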

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.78s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-546785 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-546785 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.78s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (11.83s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-546785 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-546785 --alsologtostderr -v=3: (11.829769092s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.83s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-546785 -n old-k8s-version-546785
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-546785 -n old-k8s-version-546785: exit status 7 (92.717371ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-546785 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)
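
Note: exit status 7 from minikube status denotes a stopped host, which the harness explicitly tolerates ("may be ok"); the profile only needs to exist for the addon to be enabled while stopped. The same guard in shell form:

    out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-546785 || true   # prints Stopped, exit 7
    out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-546785 --images=MetricsScraper=registry.k8s.io/echoserver:1.4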

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (411.05s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-546785 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-546785 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (6m50.735282831s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-546785 -n old-k8s-version-546785
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (411.05s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-727141 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6e5abf40-b09d-4fae-a171-adc538518692] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6e5abf40-b09d-4fae-a171-adc538518692] Running
E0116 03:14:33.714970  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/addons-411655/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003845535s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-727141 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.23s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.95s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-727141 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-727141 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.95s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (11.96s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-727141 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-727141 --alsologtostderr -v=3: (11.955006709s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.96s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-727141 -n embed-certs-727141
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-727141 -n embed-certs-727141: exit status 7 (82.965162ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-727141 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (337.03s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-727141 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-727141 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (5m36.70818113s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-727141 -n embed-certs-727141
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (337.03s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (57.76s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-626735 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-626735 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (57.756184042s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (57.76s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (71.59s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-265807 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E0116 03:15:53.306588  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/functional-380867/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-265807 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (1m11.593756216s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (71.59s)

TestStartStop/group/no-preload/serial/DeployApp (9.23s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-626735 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b3a7a7f5-900d-4181-a0aa-019ae1a070df] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b3a7a7f5-900d-4181-a0aa-019ae1a070df] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.00394091s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-626735 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.23s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.88s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-626735 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-626735 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.88s)

TestStartStop/group/no-preload/serial/Stop (11.81s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-626735 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-626735 --alsologtostderr -v=3: (11.813966921s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.81s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-626735 -n no-preload-626735
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-626735 -n no-preload-626735: exit status 7 (84.538478ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-626735 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/SecondStart (594.58s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-626735 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-626735 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (9m54.275760889s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-626735 -n no-preload-626735
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (594.58s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-265807 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [52410178-39be-457e-b012-2a3a40842e3c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [52410178-39be-457e-b012-2a3a40842e3c] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004077952s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-265807 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.26s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.93s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-265807 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-265807 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.93s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.85s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-265807 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-265807 --alsologtostderr -v=3: (11.854679281s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.85s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-265807 -n default-k8s-diff-port-265807
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-265807 -n default-k8s-diff-port-265807: exit status 7 (86.521774ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-265807 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (338.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-265807 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E0116 03:17:36.761797  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/addons-411655/client.crt: no such file or directory
E0116 03:18:30.716309  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599/client.crt: no such file or directory
E0116 03:19:33.715024  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/addons-411655/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-265807 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (5m37.874927578s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-265807 -n default-k8s-diff-port-265807
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (338.20s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (12s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-jn7c9" [eeacef9b-e302-4aaa-a53b-1860ea96cd5a] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-jn7c9" [eeacef9b-e302-4aaa-a53b-1860ea96cd5a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.003480629s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (12.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-jn7c9" [eeacef9b-e302-4aaa-a53b-1860ea96cd5a] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00372716s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-727141 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-727141 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/embed-certs/serial/Pause (2.78s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-727141 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-727141 -n embed-certs-727141
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-727141 -n embed-certs-727141: exit status 2 (309.078797ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-727141 -n embed-certs-727141
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-727141 -n embed-certs-727141: exit status 2 (316.260445ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-727141 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-727141 -n embed-certs-727141
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-727141 -n embed-certs-727141
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.78s)

TestStartStop/group/newest-cni/serial/FirstStart (34.02s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-870756 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0116 03:20:53.307164  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/functional-380867/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-870756 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (34.016656658s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (34.02s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-tgk8f" [54960c1f-5b6d-47a0-a61d-86563253c66f] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003491844s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-tgk8f" [54960c1f-5b6d-47a0-a61d-86563253c66f] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003568612s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-546785 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.76s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-870756 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.76s)

TestStartStop/group/newest-cni/serial/Stop (1.21s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-870756 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-870756 --alsologtostderr -v=3: (1.205180541s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.21s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-870756 -n newest-cni-870756
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-870756 -n newest-cni-870756: exit status 7 (80.984213ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-870756 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (26.25s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-870756 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-870756 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (25.901025024s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-870756 -n newest-cni-870756
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (26.25s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-546785 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/old-k8s-version/serial/Pause (2.73s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-546785 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-546785 -n old-k8s-version-546785
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-546785 -n old-k8s-version-546785: exit status 2 (308.980891ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-546785 -n old-k8s-version-546785
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-546785 -n old-k8s-version-546785: exit status 2 (315.354819ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-546785 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-546785 -n old-k8s-version-546785
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-546785 -n old-k8s-version-546785
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.73s)

TestNetworkPlugins/group/auto/Start (70.73s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-126430 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-126430 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m10.730656002s)
--- PASS: TestNetworkPlugins/group/auto/Start (70.73s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-870756 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/newest-cni/serial/Pause (2.95s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-870756 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-870756 -n newest-cni-870756
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-870756 -n newest-cni-870756: exit status 2 (346.87415ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-870756 -n newest-cni-870756
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-870756 -n newest-cni-870756: exit status 2 (364.228546ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-870756 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-870756 -n newest-cni-870756
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-870756 -n newest-cni-870756
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.95s)

TestNetworkPlugins/group/kindnet/Start (70.89s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-126430 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-126430 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m10.888663298s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (70.89s)

TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-126430 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

TestNetworkPlugins/group/auto/NetCatPod (9.19s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-126430 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-7mkhf" [90d52316-226a-4c22-b60b-b8665befe2ac] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-7mkhf" [90d52316-226a-4c22-b60b-b8665befe2ac] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004296756s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.19s)

TestNetworkPlugins/group/auto/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-126430 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

TestNetworkPlugins/group/auto/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-126430 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

TestNetworkPlugins/group/auto/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-126430 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.10s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (11.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-8f9k4" [8e7682b6-e603-4264-a9c5-12e88e168280] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-8f9k4" [8e7682b6-e603-4264-a9c5-12e88e168280] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.004130095s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (11.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-8f9k4" [8e7682b6-e603-4264-a9c5-12e88e168280] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00422763s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-265807 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-rzjmk" [a33f1b44-df8f-4859-a354-c74e95717d73] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00458373s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-265807 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-265807 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-265807 -n default-k8s-diff-port-265807
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-265807 -n default-k8s-diff-port-265807: exit status 2 (326.782747ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-265807 -n default-k8s-diff-port-265807
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-265807 -n default-k8s-diff-port-265807: exit status 2 (335.990546ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-265807 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-265807 -n default-k8s-diff-port-265807
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-265807 -n default-k8s-diff-port-265807
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.23s)

TestNetworkPlugins/group/calico/Start (70.78s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-126430 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-126430 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m10.781134725s)
--- PASS: TestNetworkPlugins/group/calico/Start (70.78s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-126430 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.2s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-126430 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-zlhzv" [9b3059ae-f719-43e6-8205-90049fb49857] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-zlhzv" [9b3059ae-f719-43e6-8205-90049fb49857] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004685193s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.20s)

TestNetworkPlugins/group/custom-flannel/Start (62.35s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-126430 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-126430 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m2.34852507s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (62.35s)

TestNetworkPlugins/group/kindnet/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-126430 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

TestNetworkPlugins/group/kindnet/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-126430 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

TestNetworkPlugins/group/kindnet/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-126430 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

TestNetworkPlugins/group/enable-default-cni/Start (71s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-126430 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E0116 03:23:56.354788  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/functional-380867/client.crt: no such file or directory
E0116 03:24:04.322867  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/old-k8s-version-546785/client.crt: no such file or directory
E0116 03:24:04.328167  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/old-k8s-version-546785/client.crt: no such file or directory
E0116 03:24:04.338446  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/old-k8s-version-546785/client.crt: no such file or directory
E0116 03:24:04.358740  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/old-k8s-version-546785/client.crt: no such file or directory
E0116 03:24:04.399038  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/old-k8s-version-546785/client.crt: no such file or directory
E0116 03:24:04.480054  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/old-k8s-version-546785/client.crt: no such file or directory
E0116 03:24:04.640509  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/old-k8s-version-546785/client.crt: no such file or directory
E0116 03:24:04.961213  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/old-k8s-version-546785/client.crt: no such file or directory
E0116 03:24:05.602249  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/old-k8s-version-546785/client.crt: no such file or directory
E0116 03:24:06.882798  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/old-k8s-version-546785/client.crt: no such file or directory
E0116 03:24:09.443825  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/old-k8s-version-546785/client.crt: no such file or directory
E0116 03:24:14.564219  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/old-k8s-version-546785/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-126430 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m11.004123318s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (71.00s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-126430 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-126430 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-gcgn8" [dde82762-d6f5-40dd-b901-9bc14f35cd7e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-gcgn8" [dde82762-d6f5-40dd-b901-9bc14f35cd7e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.003953572s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.18s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-qgd5d" [726ba58d-1fa6-4690-847c-4d986446c602] Running
E0116 03:24:24.804674  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/old-k8s-version-546785/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00433207s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-126430 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

TestNetworkPlugins/group/calico/NetCatPod (10.18s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-126430 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-hz5tp" [aa647e11-1a29-4064-8922-124b12f04edf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-hz5tp" [aa647e11-1a29-4064-8922-124b12f04edf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.00348295s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.18s)

TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-126430 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-126430 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-126430 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

TestNetworkPlugins/group/calico/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-126430 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

TestNetworkPlugins/group/calico/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-126430 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

TestNetworkPlugins/group/calico/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-126430 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

TestNetworkPlugins/group/flannel/Start (61.86s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-126430 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-126430 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m1.855297608s)
--- PASS: TestNetworkPlugins/group/flannel/Start (61.86s)
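
Note: to replay this start outside CI with a stock minikube binary instead of the out/minikube-linux-amd64 build, something like the following should be equivalent, after which the flannel DaemonSet pods in kube-flannel should reach Running (see ControllerPod further down):

	minikube start -p flannel-126430 --memory=3072 --cni=flannel --driver=docker --container-runtime=crio
	kubectl --context flannel-126430 -n kube-flannel get pods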

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.4s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-126430 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.40s)
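
Note: "pgrep -a kubelet" prints the kubelet PID together with its full command line, which is what the test inspects for the expected container-runtime and CNI flags. The same check by hand:

	minikube ssh -p enable-default-cni-126430 "pgrep -a kubelet"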

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.15s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-126430 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-jff8m" [e6080815-52d1-4303-8c34-101cd29d1323] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-jff8m" [e6080815-52d1-4303-8c34-101cd29d1323] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.004126678s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.15s)
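
Note: the pod-watch helper above polls for pods labelled app=netcat until they are Running. Assuming that label selector (it is the one the helper logs) and the 15m0s budget shown above, a plain kubectl equivalent would be:

	kubectl --context enable-default-cni-126430 wait --for=condition=Ready pod -l app=netcat --timeout=900s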

TestNetworkPlugins/group/bridge/Start (38.8s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-126430 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-126430 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (38.798734503s)
--- PASS: TestNetworkPlugins/group/bridge/Start (38.80s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-126430 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-126430 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-126430 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-126430 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

TestNetworkPlugins/group/bridge/NetCatPod (10.18s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-126430 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-sfwx2" [ed568c96-3356-4d30-913d-7a335c96b020] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-sfwx2" [ed568c96-3356-4d30-913d-7a335c96b020] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003545135s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.18s)

TestNetworkPlugins/group/bridge/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-126430 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

TestNetworkPlugins/group/bridge/Localhost (0.1s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-126430 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.10s)

TestNetworkPlugins/group/bridge/HairPin (0.1s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-126430 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.10s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-hmwr6" [6fe3cd16-2354-4790-a0c1-8bbe542ada38] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003865857s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
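
Note: ControllerPod confirms the CNI's own agent is healthy before any traffic tests run against it. The same check by hand, using the app=flannel selector and kube-flannel namespace shown above:

	kubectl --context flannel-126430 -n kube-flannel get pods -l app=flannel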

TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-126430 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/flannel/NetCatPod (8.18s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-126430 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-h224m" [1a2a6eae-7a02-4d29-aeba-3fad6906ec93] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-h224m" [1a2a6eae-7a02-4d29-aeba-3fad6906ec93] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.003925794s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.18s)

TestNetworkPlugins/group/flannel/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-126430 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

TestNetworkPlugins/group/flannel/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-126430 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

TestNetworkPlugins/group/flannel/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-126430 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)
E0116 03:26:33.762421  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/ingress-addon-legacy-570599/client.crt: no such file or directory

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-x82s7" [b37b0c21-dd61-47db-be3b-0d35542df802] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003671356s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-x82s7" [b37b0c21-dd61-47db-be3b-0d35542df802] Running
E0116 03:26:48.167232  450573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/old-k8s-version-546785/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004257587s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-626735 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-626735 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)
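
Note: this subtest lists every image present in the node's runtime and reports anything outside the stock minikube set; the two images flagged above are expected test workloads, not a failure. To reproduce the listing:

	minikube -p no-preload-626735 image list --format=json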

TestStartStop/group/no-preload/serial/Pause (2.66s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-626735 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-626735 -n no-preload-626735
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-626735 -n no-preload-626735: exit status 2 (296.41528ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-626735 -n no-preload-626735
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-626735 -n no-preload-626735: exit status 2 (293.833281ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-626735 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-626735 -n no-preload-626735
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-626735 -n no-preload-626735
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.66s)
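
Note: the two "exit status 2 (may be ok)" results above are the expected outcome, not failures: minikube status exits non-zero when a component is not Running, so a correctly paused cluster reports APIServer=Paused and Kubelet=Stopped with exit code 2. The sequence being exercised is, roughly:

	minikube pause -p no-preload-626735
	minikube status -p no-preload-626735 --format='{{.APIServer}}'
	minikube unpause -p no-preload-626735
	minikube status -p no-preload-626735 --format='{{.APIServer}}'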

Test skip (27/320)

TestDownloadOnly/v1.16.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.16s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-023275" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-023275
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

TestNetworkPlugins/group/kubenet (3.64s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-126430 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-126430

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-126430

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-126430

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-126430

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-126430

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-126430

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-126430

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-126430

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-126430

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-126430

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126430"

>>> host: /etc/hosts:
* Profile "kubenet-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126430"

>>> host: /etc/resolv.conf:
* Profile "kubenet-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126430"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-126430

>>> host: crictl pods:
* Profile "kubenet-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126430"

>>> host: crictl containers:
* Profile "kubenet-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126430"

>>> k8s: describe netcat deployment:
error: context "kubenet-126430" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-126430" does not exist

>>> k8s: netcat logs:
error: context "kubenet-126430" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-126430" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-126430" does not exist

>>> k8s: coredns logs:
error: context "kubenet-126430" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-126430" does not exist

>>> k8s: api server logs:
error: context "kubenet-126430" does not exist

>>> host: /etc/cni:
* Profile "kubenet-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126430"

>>> host: ip a s:
* Profile "kubenet-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126430"

>>> host: ip r s:
* Profile "kubenet-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126430"

>>> host: iptables-save:
* Profile "kubenet-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126430"

>>> host: iptables table nat:
* Profile "kubenet-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126430"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-126430" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-126430" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-126430" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126430"

>>> host: kubelet daemon config:
* Profile "kubenet-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126430"

>>> k8s: kubelet logs:
* Profile "kubenet-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126430"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126430"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126430"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17965-443749/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 16 Jan 2024 03:10:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-000053
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17965-443749/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 16 Jan 2024 03:10:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.26.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: missing-upgrade-803515
contexts:
- context:
    cluster: kubernetes-upgrade-000053
    user: kubernetes-upgrade-000053
  name: kubernetes-upgrade-000053
- context:
    cluster: missing-upgrade-803515
    extensions:
    - extension:
        last-update: Tue, 16 Jan 2024 03:10:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.26.0
      name: context_info
    namespace: default
    user: missing-upgrade-803515
  name: missing-upgrade-803515
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-000053
  user:
    client-certificate: /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/kubernetes-upgrade-000053/client.crt
    client-key: /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/kubernetes-upgrade-000053/client.key
- name: missing-upgrade-803515
  user:
    client-certificate: /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/missing-upgrade-803515/client.crt
    client-key: /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/missing-upgrade-803515/client.key
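
Note: current-context is empty in the dump above because the kubenet-126430 profile was never created; the two entries that remain were left behind by earlier upgrade tests in this run. To inspect or switch entries in such a kubeconfig by hand:

	kubectl config get-contexts
	kubectl config use-context kubernetes-upgrade-000053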

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-126430

>>> host: docker daemon status:
* Profile "kubenet-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126430"

>>> host: docker daemon config:
* Profile "kubenet-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126430"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126430"

>>> host: docker system info:
* Profile "kubenet-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126430"

>>> host: cri-docker daemon status:
* Profile "kubenet-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126430"

>>> host: cri-docker daemon config:
* Profile "kubenet-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126430"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126430"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126430"

>>> host: cri-dockerd version:
* Profile "kubenet-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126430"

>>> host: containerd daemon status:
* Profile "kubenet-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126430"

>>> host: containerd daemon config:
* Profile "kubenet-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126430"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126430"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126430"

>>> host: containerd config dump:
* Profile "kubenet-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126430"

>>> host: crio daemon status:
* Profile "kubenet-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126430"

>>> host: crio daemon config:
* Profile "kubenet-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126430"

>>> host: /etc/crio:
* Profile "kubenet-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126430"

>>> host: crio config:
* Profile "kubenet-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126430"

----------------------- debugLogs end: kubenet-126430 [took: 3.49108104s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-126430" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-126430
--- SKIP: TestNetworkPlugins/group/kubenet (3.64s)
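
Note: all of the "context was not found" and "Profile ... not found" lines in the dump above (and in the cilium dump below) are expected: the test skips before the kubenet-126430 profile is ever created, but debugLogs still runs its full battery of diagnostics against the nonexistent cluster. Verifying nothing was left behind:

	minikube profile list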

TestNetworkPlugins/group/cilium (3.69s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-126430 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-126430

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-126430

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-126430

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-126430

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-126430

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-126430

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-126430

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-126430

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-126430

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-126430

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126430"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126430"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126430"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-126430

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126430"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126430"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-126430" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-126430" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-126430" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-126430" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-126430" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-126430" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-126430" does not exist

>>> k8s: api server logs:
error: context "cilium-126430" does not exist

>>> host: /etc/cni:
* Profile "cilium-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126430"

>>> host: ip a s:
* Profile "cilium-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126430"

>>> host: ip r s:
* Profile "cilium-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126430"

>>> host: iptables-save:
* Profile "cilium-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126430"

>>> host: iptables table nat:
* Profile "cilium-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126430"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-126430

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-126430

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-126430" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-126430" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-126430

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-126430

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-126430" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-126430" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-126430" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-126430" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-126430" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126430"

>>> host: kubelet daemon config:
* Profile "cilium-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126430"

>>> k8s: kubelet logs:
* Profile "cilium-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126430"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126430"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126430"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17965-443749/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 16 Jan 2024 03:10:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-000053
contexts:
- context:
    cluster: kubernetes-upgrade-000053
    user: kubernetes-upgrade-000053
  name: kubernetes-upgrade-000053
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-000053
  user:
    client-certificate: /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/kubernetes-upgrade-000053/client.crt
    client-key: /home/jenkins/minikube-integration/17965-443749/.minikube/profiles/kubernetes-upgrade-000053/client.key
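
The kubectl config dump above explains the shape of every kubectl failure in this section: current-context is empty and the only remaining entry is the stale kubernetes-upgrade-000053 profile, so there is no cilium-126430 context for kubectl to resolve. A minimal sketch for confirming this against the same kubeconfig, using standard kubectl config subcommands (profile and context names are taken from this log):

# No cilium-126430 entry will appear in the list.
kubectl config get-contexts

# Fails because current-context is "" in the dump above.
kubectl config current-context

# Reproduces the missing-context errors seen throughout this section.
kubectl --context cilium-126430 get pods -A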

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-126430

>>> host: docker daemon status:
* Profile "cilium-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126430"

>>> host: docker daemon config:
* Profile "cilium-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126430"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126430"

>>> host: docker system info:
* Profile "cilium-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126430"

>>> host: cri-docker daemon status:
* Profile "cilium-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126430"

>>> host: cri-docker daemon config:
* Profile "cilium-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126430"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126430"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126430"

>>> host: cri-dockerd version:
* Profile "cilium-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126430"

>>> host: containerd daemon status:
* Profile "cilium-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126430"

>>> host: containerd daemon config:
* Profile "cilium-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126430"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126430"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126430"

>>> host: containerd config dump:
* Profile "cilium-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126430"

>>> host: crio daemon status:
* Profile "cilium-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126430"

>>> host: crio daemon config:
* Profile "cilium-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126430"

>>> host: /etc/crio:
* Profile "cilium-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126430"

>>> host: crio config:
* Profile "cilium-126430" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126430"

----------------------- debugLogs end: cilium-126430 [took: 3.538712329s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-126430" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-126430
--- SKIP: TestNetworkPlugins/group/cilium (3.69s)
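
Every command in this debugLogs dump failed in one of two expected ways for a profile that was never started: host-side checks fail minikube's profile lookup, and cluster-side checks fail kubectl's context lookup. A short sketch reproducing both failure shapes against the absent profile (same binaries and flags used elsewhere in this report; the exact error wording varies by kubectl subcommand):

# Host-side: fails at profile lookup, printing the "Profile not found" hint above.
out/minikube-linux-amd64 ssh -p cilium-126430 "cat /etc/resolv.conf"

# Cluster-side: fails at context lookup, as in the kubectl errors above.
kubectl --context cilium-126430 describe daemonset cilium -n kube-system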