Test Report: Docker_Linux_crio 17830

f2d99d5d3acbee63fb92e6e0c0b75bbff35f3ad4:2024-01-08:32615

Failed tests (6/316)

|-------|-------------------------------------------------------|--------------|
| Order | Failed test                                           | Duration (s) |
|-------|-------------------------------------------------------|--------------|
|    35 | TestAddons/parallel/Ingress                           |       152.62 |
|   125 | TestFunctional/parallel/ImageCommands/ImageLoadDaemon |         9.88 |
|   167 | TestIngressAddonLegacy/serial/ValidateIngressAddons   |       176.64 |
|   217 | TestMultiNode/serial/PingHostFrom2Pods                |         3.47 |
|   239 | TestRunningBinaryUpgrade                              |        61.91 |
|   254 | TestStoppedBinaryUpgrade/Upgrade                      |        83.79 |
|-------|-------------------------------------------------------|--------------|
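To iterate on one of these failures locally, the standard Go test filter works against minikube's integration suite. A minimal sketch, assuming a checked-out minikube tree with out/minikube-linux-amd64 already built; the --minikube-start-args flag name is an assumption taken from the integration harness conventions and should be verified against test/integration's flag definitions:

  # re-run only the failing ingress test with the same driver/runtime as this job
  go test -v -timeout 30m ./test/integration -run 'TestAddons/parallel/Ingress' \
    -args --minikube-start-args='--driver=docker --container-runtime=crio'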
TestAddons/parallel/Ingress (152.62s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-608450 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-608450 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-608450 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [f4dfca7e-b150-4c9d-9521-d009c9b8c019] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [f4dfca7e-b150-4c9d-9521-d009c9b8c019] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.004375s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-608450 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-608450 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.484843522s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-608450 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-608450 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-608450 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-608450 addons disable ingress-dns --alsologtostderr -v=1: (1.402678086s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-608450 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-608450 addons disable ingress --alsologtostderr -v=1: (7.639724708s)
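Before the post-mortem, one detail worth noting: "Process exited with status 28" is the exit code of the curl run inside the node (28 is curl's "operation timed out" code), not an ssh failure, so the request to the ingress controller simply never completed. A minimal sketch for reproducing the probe by hand against this profile, with an explicit timeout added (profile and host names taken from the log above):

  # re-issue the failing probe; -v shows whether the connect or the response stalls
  out/minikube-linux-amd64 -p addons-608450 ssh \
    "curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"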
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-608450
helpers_test.go:235: (dbg) docker inspect addons-608450:

-- stdout --
	[
	    {
	        "Id": "7910492989e553469ed5faf589502209db570b7f1bfa70d4a42c2985db3bb093",
	        "Created": "2024-01-08T22:52:26.395331371Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 329998,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-08T22:52:26.657698655Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a68510052ff42a82cad4cbbd1f236376dac91176d14d2a924a5e2b18f7ff0a23",
	        "ResolvConfPath": "/var/lib/docker/containers/7910492989e553469ed5faf589502209db570b7f1bfa70d4a42c2985db3bb093/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7910492989e553469ed5faf589502209db570b7f1bfa70d4a42c2985db3bb093/hostname",
	        "HostsPath": "/var/lib/docker/containers/7910492989e553469ed5faf589502209db570b7f1bfa70d4a42c2985db3bb093/hosts",
	        "LogPath": "/var/lib/docker/containers/7910492989e553469ed5faf589502209db570b7f1bfa70d4a42c2985db3bb093/7910492989e553469ed5faf589502209db570b7f1bfa70d4a42c2985db3bb093-json.log",
	        "Name": "/addons-608450",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-608450:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-608450",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/57f932ba6a2bf68d1e3a2babe776383ddbc62ff11d62410a6fda6c5a03d81f8c-init/diff:/var/lib/docker/overlay2/5d41a77db4225bbdb2799c0759ad4432ee2e97ed824f853dc9d7fa3db67a2cbc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/57f932ba6a2bf68d1e3a2babe776383ddbc62ff11d62410a6fda6c5a03d81f8c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/57f932ba6a2bf68d1e3a2babe776383ddbc62ff11d62410a6fda6c5a03d81f8c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/57f932ba6a2bf68d1e3a2babe776383ddbc62ff11d62410a6fda6c5a03d81f8c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-608450",
	                "Source": "/var/lib/docker/volumes/addons-608450/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-608450",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-608450",
	                "name.minikube.sigs.k8s.io": "addons-608450",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "220ef254ba0d323889468a3570792b1f6f39dea42bdd28c9a57c22fd68267894",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/220ef254ba0d",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-608450": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "7910492989e5",
	                        "addons-608450"
	                    ],
	                    "NetworkID": "c91fe0b52c80b94ded416c6e26b92702ee5e5fdfffe9f0313c85c3b51a348e09",
	                    "EndpointID": "cd8797b1f9048c6fb9c2ae2bf98923575787445d4fff0abd41c61ae641592679",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
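The full inspect dump above is what the harness archives; individual fields can be pulled directly with docker inspect --format, which is how the harness itself obtains values such as the node IP (192.168.49.2) and the mapped SSH port (see the cli_runner lines later in this log). A small sketch using the container name from this report:

  # node IP on the cluster network, and the host port mapped to 22/tcp
  docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' addons-608450
  docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-608450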
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-608450 -n addons-608450
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-608450 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-608450 logs -n 25: (1.187472595s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-926847                                                                     | download-only-926847   | jenkins | v1.32.0 | 08 Jan 24 22:52 UTC | 08 Jan 24 22:52 UTC |
	| delete  | -p download-only-926847                                                                     | download-only-926847   | jenkins | v1.32.0 | 08 Jan 24 22:52 UTC | 08 Jan 24 22:52 UTC |
	| start   | --download-only -p                                                                          | download-docker-817040 | jenkins | v1.32.0 | 08 Jan 24 22:52 UTC |                     |
	|         | download-docker-817040                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-817040                                                                   | download-docker-817040 | jenkins | v1.32.0 | 08 Jan 24 22:52 UTC | 08 Jan 24 22:52 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-114815   | jenkins | v1.32.0 | 08 Jan 24 22:52 UTC |                     |
	|         | binary-mirror-114815                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:33677                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-114815                                                                     | binary-mirror-114815   | jenkins | v1.32.0 | 08 Jan 24 22:52 UTC | 08 Jan 24 22:52 UTC |
	| addons  | disable dashboard -p                                                                        | addons-608450          | jenkins | v1.32.0 | 08 Jan 24 22:52 UTC |                     |
	|         | addons-608450                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-608450          | jenkins | v1.32.0 | 08 Jan 24 22:52 UTC |                     |
	|         | addons-608450                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-608450 --wait=true                                                                | addons-608450          | jenkins | v1.32.0 | 08 Jan 24 22:52 UTC | 08 Jan 24 22:54 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --driver=docker                                                               |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-608450          | jenkins | v1.32.0 | 08 Jan 24 22:54 UTC | 08 Jan 24 22:54 UTC |
	|         | addons-608450                                                                               |                        |         |         |                     |                     |
	| addons  | addons-608450 addons disable                                                                | addons-608450          | jenkins | v1.32.0 | 08 Jan 24 22:54 UTC | 08 Jan 24 22:54 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-608450 ip                                                                            | addons-608450          | jenkins | v1.32.0 | 08 Jan 24 22:54 UTC | 08 Jan 24 22:54 UTC |
	| addons  | addons-608450 addons disable                                                                | addons-608450          | jenkins | v1.32.0 | 08 Jan 24 22:54 UTC | 08 Jan 24 22:54 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-608450 addons                                                                        | addons-608450          | jenkins | v1.32.0 | 08 Jan 24 22:54 UTC | 08 Jan 24 22:54 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-608450          | jenkins | v1.32.0 | 08 Jan 24 22:54 UTC | 08 Jan 24 22:54 UTC |
	|         | -p addons-608450                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-608450 ssh curl -s                                                                   | addons-608450          | jenkins | v1.32.0 | 08 Jan 24 22:54 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-608450          | jenkins | v1.32.0 | 08 Jan 24 22:54 UTC | 08 Jan 24 22:54 UTC |
	|         | -p addons-608450                                                                            |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-608450          | jenkins | v1.32.0 | 08 Jan 24 22:54 UTC | 08 Jan 24 22:54 UTC |
	|         | addons-608450                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-608450 ssh cat                                                                       | addons-608450          | jenkins | v1.32.0 | 08 Jan 24 22:54 UTC | 08 Jan 24 22:54 UTC |
	|         | /opt/local-path-provisioner/pvc-3ba3a57c-5f41-4761-a694-297c1dadc482_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-608450 addons disable                                                                | addons-608450          | jenkins | v1.32.0 | 08 Jan 24 22:54 UTC | 08 Jan 24 22:55 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-608450 addons                                                                        | addons-608450          | jenkins | v1.32.0 | 08 Jan 24 22:55 UTC | 08 Jan 24 22:55 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-608450 addons                                                                        | addons-608450          | jenkins | v1.32.0 | 08 Jan 24 22:55 UTC | 08 Jan 24 22:55 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-608450 ip                                                                            | addons-608450          | jenkins | v1.32.0 | 08 Jan 24 22:56 UTC | 08 Jan 24 22:56 UTC |
	| addons  | addons-608450 addons disable                                                                | addons-608450          | jenkins | v1.32.0 | 08 Jan 24 22:56 UTC | 08 Jan 24 22:56 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-608450 addons disable                                                                | addons-608450          | jenkins | v1.32.0 | 08 Jan 24 22:56 UTC | 08 Jan 24 22:56 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 22:52:02
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 22:52:02.660667  329388 out.go:296] Setting OutFile to fd 1 ...
	I0108 22:52:02.660889  329388 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:52:02.660897  329388 out.go:309] Setting ErrFile to fd 2...
	I0108 22:52:02.660901  329388 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:52:02.661066  329388 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-321683/.minikube/bin
	I0108 22:52:02.661688  329388 out.go:303] Setting JSON to false
	I0108 22:52:02.662568  329388 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":12855,"bootTime":1704741468,"procs":168,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 22:52:02.662628  329388 start.go:138] virtualization: kvm guest
	I0108 22:52:02.664968  329388 out.go:177] * [addons-608450] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 22:52:02.666258  329388 out.go:177]   - MINIKUBE_LOCATION=17830
	I0108 22:52:02.666270  329388 notify.go:220] Checking for updates...
	I0108 22:52:02.667599  329388 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 22:52:02.669189  329388 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17830-321683/kubeconfig
	I0108 22:52:02.670680  329388 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-321683/.minikube
	I0108 22:52:02.671984  329388 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 22:52:02.673349  329388 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 22:52:02.674744  329388 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 22:52:02.695396  329388 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 22:52:02.695552  329388 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 22:52:02.751254  329388 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:38 SystemTime:2024-01-08 22:52:02.743364665 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 22:52:02.751375  329388 docker.go:295] overlay module found
	I0108 22:52:02.753337  329388 out.go:177] * Using the docker driver based on user configuration
	I0108 22:52:02.754506  329388 start.go:298] selected driver: docker
	I0108 22:52:02.754526  329388 start.go:902] validating driver "docker" against <nil>
	I0108 22:52:02.754542  329388 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 22:52:02.755303  329388 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 22:52:02.806516  329388 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:38 SystemTime:2024-01-08 22:52:02.798199825 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 22:52:02.806722  329388 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0108 22:52:02.806955  329388 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 22:52:02.808862  329388 out.go:177] * Using Docker driver with root privileges
	I0108 22:52:02.810136  329388 cni.go:84] Creating CNI manager for ""
	I0108 22:52:02.810164  329388 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0108 22:52:02.810178  329388 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I0108 22:52:02.810194  329388 start_flags.go:323] config:
	{Name:addons-608450 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-608450 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 22:52:02.811906  329388 out.go:177] * Starting control plane node addons-608450 in cluster addons-608450
	I0108 22:52:02.813382  329388 cache.go:121] Beginning downloading kic base image for docker with crio
	I0108 22:52:02.814908  329388 out.go:177] * Pulling base image v0.0.42-1704751654-17830 ...
	I0108 22:52:02.816251  329388 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 22:52:02.816277  329388 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 in local docker daemon
	I0108 22:52:02.816287  329388 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17830-321683/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0108 22:52:02.816296  329388 cache.go:56] Caching tarball of preloaded images
	I0108 22:52:02.816370  329388 preload.go:174] Found /home/jenkins/minikube-integration/17830-321683/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0108 22:52:02.816381  329388 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0108 22:52:02.816759  329388 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/addons-608450/config.json ...
	I0108 22:52:02.816781  329388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/addons-608450/config.json: {Name:mkd802f3ab5010e0eedfc2d2cdd90262898c1f47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:52:02.831722  329388 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 to local cache
	I0108 22:52:02.831863  329388 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 in local cache directory
	I0108 22:52:02.831890  329388 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 in local cache directory, skipping pull
	I0108 22:52:02.831896  329388 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 exists in cache, skipping pull
	I0108 22:52:02.831904  329388 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 as a tarball
	I0108 22:52:02.831909  329388 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 from local cache
	I0108 22:52:13.797059  329388 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 from cached tarball
	I0108 22:52:13.797100  329388 cache.go:194] Successfully downloaded all kic artifacts
	I0108 22:52:13.797173  329388 start.go:365] acquiring machines lock for addons-608450: {Name:mkf10781ad62f07840e47f073aacc72a75f3d4da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:52:13.797275  329388 start.go:369] acquired machines lock for "addons-608450" in 80.931µs
	I0108 22:52:13.797300  329388 start.go:93] Provisioning new machine with config: &{Name:addons-608450 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-608450 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 22:52:13.797378  329388 start.go:125] createHost starting for "" (driver="docker")
	I0108 22:52:13.799599  329388 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0108 22:52:13.799876  329388 start.go:159] libmachine.API.Create for "addons-608450" (driver="docker")
	I0108 22:52:13.799906  329388 client.go:168] LocalClient.Create starting
	I0108 22:52:13.800071  329388 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17830-321683/.minikube/certs/ca.pem
	I0108 22:52:13.883189  329388 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17830-321683/.minikube/certs/cert.pem
	I0108 22:52:14.083624  329388 cli_runner.go:164] Run: docker network inspect addons-608450 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0108 22:52:14.099497  329388 cli_runner.go:211] docker network inspect addons-608450 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0108 22:52:14.099571  329388 network_create.go:281] running [docker network inspect addons-608450] to gather additional debugging logs...
	I0108 22:52:14.099591  329388 cli_runner.go:164] Run: docker network inspect addons-608450
	W0108 22:52:14.115139  329388 cli_runner.go:211] docker network inspect addons-608450 returned with exit code 1
	I0108 22:52:14.115170  329388 network_create.go:284] error running [docker network inspect addons-608450]: docker network inspect addons-608450: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-608450 not found
	I0108 22:52:14.115181  329388 network_create.go:286] output of [docker network inspect addons-608450]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-608450 not found
	
	** /stderr **
	I0108 22:52:14.115371  329388 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 22:52:14.131713  329388 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0020aebf0}
	I0108 22:52:14.131761  329388 network_create.go:124] attempt to create docker network addons-608450 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0108 22:52:14.131804  329388 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-608450 addons-608450
	I0108 22:52:14.183980  329388 network_create.go:108] docker network addons-608450 192.168.49.0/24 created
	I0108 22:52:14.184059  329388 kic.go:121] calculated static IP "192.168.49.2" for the "addons-608450" container
	I0108 22:52:14.184135  329388 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0108 22:52:14.199974  329388 cli_runner.go:164] Run: docker volume create addons-608450 --label name.minikube.sigs.k8s.io=addons-608450 --label created_by.minikube.sigs.k8s.io=true
	I0108 22:52:14.217275  329388 oci.go:103] Successfully created a docker volume addons-608450
	I0108 22:52:14.217372  329388 cli_runner.go:164] Run: docker run --rm --name addons-608450-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-608450 --entrypoint /usr/bin/test -v addons-608450:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 -d /var/lib
	I0108 22:52:21.173251  329388 cli_runner.go:217] Completed: docker run --rm --name addons-608450-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-608450 --entrypoint /usr/bin/test -v addons-608450:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 -d /var/lib: (6.955834699s)
	I0108 22:52:21.173285  329388 oci.go:107] Successfully prepared a docker volume addons-608450
	I0108 22:52:21.173329  329388 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 22:52:21.173356  329388 kic.go:194] Starting extracting preloaded images to volume ...
	I0108 22:52:21.173429  329388 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17830-321683/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-608450:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 -I lz4 -xf /preloaded.tar -C /extractDir
	I0108 22:52:26.329689  329388 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17830-321683/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-608450:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 -I lz4 -xf /preloaded.tar -C /extractDir: (5.156191227s)
	I0108 22:52:26.329729  329388 kic.go:203] duration metric: took 5.156369 seconds to extract preloaded images to volume
	W0108 22:52:26.329882  329388 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0108 22:52:26.330015  329388 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0108 22:52:26.380752  329388 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-608450 --name addons-608450 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-608450 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-608450 --network addons-608450 --ip 192.168.49.2 --volume addons-608450:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617
	I0108 22:52:26.665757  329388 cli_runner.go:164] Run: docker container inspect addons-608450 --format={{.State.Running}}
	I0108 22:52:26.684377  329388 cli_runner.go:164] Run: docker container inspect addons-608450 --format={{.State.Status}}
	I0108 22:52:26.701386  329388 cli_runner.go:164] Run: docker exec addons-608450 stat /var/lib/dpkg/alternatives/iptables
	I0108 22:52:26.741028  329388 oci.go:144] the created container "addons-608450" has a running status.
	I0108 22:52:26.741063  329388 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17830-321683/.minikube/machines/addons-608450/id_rsa...
	I0108 22:52:27.021429  329388 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17830-321683/.minikube/machines/addons-608450/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0108 22:52:27.040150  329388 cli_runner.go:164] Run: docker container inspect addons-608450 --format={{.State.Status}}
	I0108 22:52:27.063488  329388 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0108 22:52:27.063513  329388 kic_runner.go:114] Args: [docker exec --privileged addons-608450 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0108 22:52:27.153586  329388 cli_runner.go:164] Run: docker container inspect addons-608450 --format={{.State.Status}}
	I0108 22:52:27.173504  329388 machine.go:88] provisioning docker machine ...
	I0108 22:52:27.173568  329388 ubuntu.go:169] provisioning hostname "addons-608450"
	I0108 22:52:27.173632  329388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-608450
	I0108 22:52:27.191710  329388 main.go:141] libmachine: Using SSH client type: native
	I0108 22:52:27.192080  329388 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 33074 <nil> <nil>}
	I0108 22:52:27.192096  329388 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-608450 && echo "addons-608450" | sudo tee /etc/hostname
	I0108 22:52:27.378589  329388 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-608450
	
	I0108 22:52:27.378681  329388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-608450
	I0108 22:52:27.396359  329388 main.go:141] libmachine: Using SSH client type: native
	I0108 22:52:27.396706  329388 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 33074 <nil> <nil>}
	I0108 22:52:27.396724  329388 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-608450' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-608450/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-608450' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 22:52:27.531370  329388 main.go:141] libmachine: SSH cmd err, output: <nil>: 
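(Note: the hosts-file script above follows the Debian convention of mapping the machine's own hostname to 127.0.1.1, keeping 127.0.0.1 reserved for localhost. A quick check inside the container, assuming the hostname set above:)

	grep -E '^127\.0\.1\.1' /etc/hosts   # expected: 127.0.1.1 addons-608450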
	I0108 22:52:27.531405  329388 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17830-321683/.minikube CaCertPath:/home/jenkins/minikube-integration/17830-321683/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17830-321683/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17830-321683/.minikube}
	I0108 22:52:27.531449  329388 ubuntu.go:177] setting up certificates
	I0108 22:52:27.531462  329388 provision.go:83] configureAuth start
	I0108 22:52:27.531521  329388 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-608450
	I0108 22:52:27.547163  329388 provision.go:138] copyHostCerts
	I0108 22:52:27.547230  329388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-321683/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17830-321683/.minikube/ca.pem (1082 bytes)
	I0108 22:52:27.547379  329388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-321683/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17830-321683/.minikube/cert.pem (1123 bytes)
	I0108 22:52:27.547487  329388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-321683/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17830-321683/.minikube/key.pem (1679 bytes)
	I0108 22:52:27.547554  329388 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17830-321683/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17830-321683/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17830-321683/.minikube/certs/ca-key.pem org=jenkins.addons-608450 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-608450]
	I0108 22:52:27.693697  329388 provision.go:172] copyRemoteCerts
	I0108 22:52:27.693769  329388 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 22:52:27.693855  329388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-608450
	I0108 22:52:27.710864  329388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/addons-608450/id_rsa Username:docker}
	I0108 22:52:27.808163  329388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-321683/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 22:52:27.830905  329388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-321683/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0108 22:52:27.853656  329388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-321683/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0108 22:52:27.876585  329388 provision.go:86] duration metric: configureAuth took 345.109009ms
	I0108 22:52:27.876614  329388 ubuntu.go:193] setting minikube options for container-runtime
	I0108 22:52:27.876857  329388 config.go:182] Loaded profile config "addons-608450": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 22:52:27.876991  329388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-608450
	I0108 22:52:27.893661  329388 main.go:141] libmachine: Using SSH client type: native
	I0108 22:52:27.894025  329388 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 33074 <nil> <nil>}
	I0108 22:52:27.894043  329388 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 22:52:28.124659  329388 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 22:52:28.124691  329388 machine.go:91] provisioned docker machine in 951.164303ms
	I0108 22:52:28.124703  329388 client.go:171] LocalClient.Create took 14.324788713s
	I0108 22:52:28.124725  329388 start.go:167] duration metric: libmachine.API.Create for "addons-608450" took 14.324849455s
	I0108 22:52:28.124736  329388 start.go:300] post-start starting for "addons-608450" (driver="docker")
	I0108 22:52:28.124751  329388 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 22:52:28.124814  329388 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 22:52:28.124871  329388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-608450
	I0108 22:52:28.141248  329388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/addons-608450/id_rsa Username:docker}
	I0108 22:52:28.236056  329388 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 22:52:28.239144  329388 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 22:52:28.239174  329388 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 22:52:28.239183  329388 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 22:52:28.239189  329388 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0108 22:52:28.239200  329388 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-321683/.minikube/addons for local assets ...
	I0108 22:52:28.239248  329388 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-321683/.minikube/files for local assets ...
	I0108 22:52:28.239299  329388 start.go:303] post-start completed in 114.555547ms
	I0108 22:52:28.239609  329388 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-608450
	I0108 22:52:28.257336  329388 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/addons-608450/config.json ...
	I0108 22:52:28.257614  329388 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 22:52:28.257662  329388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-608450
	I0108 22:52:28.274684  329388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/addons-608450/id_rsa Username:docker}
	I0108 22:52:28.368317  329388 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 22:52:28.372557  329388 start.go:128] duration metric: createHost completed in 14.575163954s
	I0108 22:52:28.372653  329388 start.go:83] releasing machines lock for "addons-608450", held for 14.575358253s
	I0108 22:52:28.372734  329388 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-608450
	I0108 22:52:28.388644  329388 ssh_runner.go:195] Run: cat /version.json
	I0108 22:52:28.388714  329388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-608450
	I0108 22:52:28.388749  329388 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 22:52:28.388807  329388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-608450
	I0108 22:52:28.405154  329388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/addons-608450/id_rsa Username:docker}
	I0108 22:52:28.406715  329388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/addons-608450/id_rsa Username:docker}
	I0108 22:52:28.498954  329388 ssh_runner.go:195] Run: systemctl --version
	I0108 22:52:28.591914  329388 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 22:52:28.728346  329388 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 22:52:28.732522  329388 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 22:52:28.749962  329388 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0108 22:52:28.750041  329388 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 22:52:28.775885  329388 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
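(Note: the two find commands above disable the stock CNI configs by renaming rather than deleting them; files without a .conf/.conflist/.json extension are ignored by CRI-O's config loader, and the .mk_disabled suffix lets minikube restore them later. A sketch of the same pattern with the globs quoted for interactive use:)

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( -name '*bridge*' -o -name '*podman*' \) ! -name '*.mk_disabled' \
	  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;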
	I0108 22:52:28.775908  329388 start.go:475] detecting cgroup driver to use...
	I0108 22:52:28.775938  329388 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0108 22:52:28.776023  329388 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 22:52:28.789591  329388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 22:52:28.799937  329388 docker.go:203] disabling cri-docker service (if available) ...
	I0108 22:52:28.800031  329388 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 22:52:28.813526  329388 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 22:52:28.826233  329388 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 22:52:28.905826  329388 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 22:52:28.985040  329388 docker.go:219] disabling docker service ...
	I0108 22:52:28.985113  329388 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 22:52:29.002733  329388 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 22:52:29.013252  329388 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 22:52:29.085030  329388 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 22:52:29.163050  329388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
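(Note: the crictl.yaml write below goes through | sudo tee rather than shell redirection, because in a plain sudo printf ... > FILE the redirection would be performed by the unprivileged caller's shell before sudo runs. A minimal equivalent of the same pattern:)

	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml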
	I0108 22:52:29.173370  329388 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 22:52:29.188810  329388 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0108 22:52:29.188862  329388 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:52:29.197830  329388 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 22:52:29.197905  329388 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:52:29.206753  329388 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:52:29.215521  329388 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:52:29.224104  329388 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 22:52:29.232053  329388 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 22:52:29.239380  329388 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 22:52:29.246864  329388 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 22:52:29.324218  329388 ssh_runner.go:195] Run: sudo systemctl restart crio
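(Note: the sed edits above switch CRI-O to the cgroupfs cgroup manager, matching the "cgroupfs" driver detected on the host, set conmon_cgroup to "pod" as CRI-O requires when cgroupfs is the manager, and pin the pause image. After the restart, the effective values can be spot-checked, as a sketch:)

	crio config 2>/dev/null | grep -E 'cgroup_manager|conmon_cgroup|pause_image'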
	I0108 22:52:29.420470  329388 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 22:52:29.420684  329388 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 22:52:29.425745  329388 start.go:543] Will wait 60s for crictl version
	I0108 22:52:29.425790  329388 ssh_runner.go:195] Run: which crictl
	I0108 22:52:29.428938  329388 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 22:52:29.463480  329388 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0108 22:52:29.463566  329388 ssh_runner.go:195] Run: crio --version
	I0108 22:52:29.498173  329388 ssh_runner.go:195] Run: crio --version
	I0108 22:52:29.533306  329388 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I0108 22:52:29.534821  329388 cli_runner.go:164] Run: docker network inspect addons-608450 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 22:52:29.550522  329388 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0108 22:52:29.553987  329388 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
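(Note: rewriting /etc/hosts via a temp file plus sudo cp, rather than sed -i, matters inside a container: Docker bind-mounts /etc/hosts, so the file has to be overwritten in place; sed -i would replace it by rename and fail with "Device or resource busy". Visible with:)

	findmnt /etc/hosts   # inside the container, shows the bind mount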
	I0108 22:52:29.564126  329388 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 22:52:29.564175  329388 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 22:52:29.618992  329388 crio.go:496] all images are preloaded for cri-o runtime.
	I0108 22:52:29.619016  329388 crio.go:415] Images already preloaded, skipping extraction
	I0108 22:52:29.619105  329388 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 22:52:29.651777  329388 crio.go:496] all images are preloaded for cri-o runtime.
	I0108 22:52:29.651801  329388 cache_images.go:84] Images are preloaded, skipping loading
	I0108 22:52:29.651874  329388 ssh_runner.go:195] Run: crio config
	I0108 22:52:29.692872  329388 cni.go:84] Creating CNI manager for ""
	I0108 22:52:29.692893  329388 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0108 22:52:29.692913  329388 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 22:52:29.692958  329388 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-608450 NodeName:addons-608450 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 22:52:29.693090  329388 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-608450"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
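(Note: the generated config stacks four documents, InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, in the one file that is copied to /var/tmp/minikube/kubeadm.yaml below. On kubeadm v1.26+ such a file can be sanity-checked without running init, as a sketch:)

	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml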
	
	I0108 22:52:29.693180  329388 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-608450 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-608450 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
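(Note: the empty ExecStart= line in the drop-in above is the standard systemd override idiom: a plain service allows only one ExecStart, so the blank assignment clears the value inherited from /lib/systemd/system/kubelet.service before the drop-in defines its own. The merged unit can be reviewed with:)

	systemctl cat kubelet   # base unit plus the 10-kubeadm.conf drop-in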
	I0108 22:52:29.693227  329388 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 22:52:29.701576  329388 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 22:52:29.701649  329388 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 22:52:29.709788  329388 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I0108 22:52:29.726134  329388 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 22:52:29.742402  329388 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I0108 22:52:29.758348  329388 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0108 22:52:29.761754  329388 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 22:52:29.772200  329388 certs.go:56] Setting up /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/addons-608450 for IP: 192.168.49.2
	I0108 22:52:29.772251  329388 certs.go:190] acquiring lock for shared ca certs: {Name:mka0fb25b2b3d7c6ea0a3bf3a5eb1e0289391c11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:52:29.772410  329388 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17830-321683/.minikube/ca.key
	I0108 22:52:29.912026  329388 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17830-321683/.minikube/ca.crt ...
	I0108 22:52:29.912057  329388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-321683/.minikube/ca.crt: {Name:mke4b04684130519cebbc248868702fad01df692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:52:29.912274  329388 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17830-321683/.minikube/ca.key ...
	I0108 22:52:29.912291  329388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-321683/.minikube/ca.key: {Name:mk737b721fafc5a6108e384bef7a79dd02b04388 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:52:29.912388  329388 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17830-321683/.minikube/proxy-client-ca.key
	I0108 22:52:30.193846  329388 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17830-321683/.minikube/proxy-client-ca.crt ...
	I0108 22:52:30.193881  329388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-321683/.minikube/proxy-client-ca.crt: {Name:mk3e704294c25e739c6368b2e678efe6aaaeedfc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:52:30.194061  329388 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17830-321683/.minikube/proxy-client-ca.key ...
	I0108 22:52:30.194082  329388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-321683/.minikube/proxy-client-ca.key: {Name:mkdfbda3d7100a54eedb0dcb000357ddf220d06b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:52:30.194216  329388 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/addons-608450/client.key
	I0108 22:52:30.194235  329388 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/addons-608450/client.crt with IP's: []
	I0108 22:52:30.371090  329388 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/addons-608450/client.crt ...
	I0108 22:52:30.371127  329388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/addons-608450/client.crt: {Name:mkb437d09334614432620cf520577c52ce2704ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:52:30.371338  329388 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/addons-608450/client.key ...
	I0108 22:52:30.371355  329388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/addons-608450/client.key: {Name:mk6b72b315809cdf987d6f27dbc0ae7fd67e38a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:52:30.371452  329388 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/addons-608450/apiserver.key.dd3b5fb2
	I0108 22:52:30.371476  329388 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/addons-608450/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0108 22:52:30.660102  329388 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/addons-608450/apiserver.crt.dd3b5fb2 ...
	I0108 22:52:30.660145  329388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/addons-608450/apiserver.crt.dd3b5fb2: {Name:mk3da351c7022bc6becce639be94c009f5b02691 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:52:30.660347  329388 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/addons-608450/apiserver.key.dd3b5fb2 ...
	I0108 22:52:30.660367  329388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/addons-608450/apiserver.key.dd3b5fb2: {Name:mk7afa5d585fa8cc91ba5e2965c67f2bd17f7923 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:52:30.660469  329388 certs.go:337] copying /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/addons-608450/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/addons-608450/apiserver.crt
	I0108 22:52:30.660575  329388 certs.go:341] copying /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/addons-608450/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/addons-608450/apiserver.key
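(Note: the apiserver certificate above is issued with the node IP, the first service-network IP, and localhost as SANs. With OpenSSL 1.1.1+ the SANs of the written cert can be inspected directly, assuming the profile path from this run:)

	openssl x509 -noout -ext subjectAltName \
	  -in /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/addons-608450/apiserver.crt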
	I0108 22:52:30.660633  329388 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/addons-608450/proxy-client.key
	I0108 22:52:30.660656  329388 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/addons-608450/proxy-client.crt with IP's: []
	I0108 22:52:30.775330  329388 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/addons-608450/proxy-client.crt ...
	I0108 22:52:30.775369  329388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/addons-608450/proxy-client.crt: {Name:mk8c3a6942bd97d5fe2207797eeaae499f6f69e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:52:30.775552  329388 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/addons-608450/proxy-client.key ...
	I0108 22:52:30.775576  329388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/addons-608450/proxy-client.key: {Name:mk86b7126d2264bcd8f98dcb56a9250b6ab11f94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:52:30.775816  329388 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-321683/.minikube/certs/home/jenkins/minikube-integration/17830-321683/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 22:52:30.775861  329388 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-321683/.minikube/certs/home/jenkins/minikube-integration/17830-321683/.minikube/certs/ca.pem (1082 bytes)
	I0108 22:52:30.775901  329388 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-321683/.minikube/certs/home/jenkins/minikube-integration/17830-321683/.minikube/certs/cert.pem (1123 bytes)
	I0108 22:52:30.775931  329388 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-321683/.minikube/certs/home/jenkins/minikube-integration/17830-321683/.minikube/certs/key.pem (1679 bytes)
	I0108 22:52:30.776698  329388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/addons-608450/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 22:52:30.798706  329388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/addons-608450/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0108 22:52:30.820054  329388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/addons-608450/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 22:52:30.841304  329388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/addons-608450/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0108 22:52:30.864907  329388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-321683/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 22:52:30.887384  329388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-321683/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0108 22:52:30.909756  329388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-321683/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 22:52:30.931474  329388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-321683/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0108 22:52:30.953021  329388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-321683/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 22:52:30.974884  329388 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 22:52:30.990841  329388 ssh_runner.go:195] Run: openssl version
	I0108 22:52:30.995906  329388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 22:52:31.004783  329388 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:52:31.007993  329388 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 22:52 /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:52:31.008041  329388 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:52:31.014352  329388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
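(Note: b5213941.0 follows OpenSSL's hashed-directory naming: the stem is the subject-name hash of the CA certificate, as printed by the -hash command above, and the .0 suffix disambiguates collisions. A sketch reproducing the link name on the node:)

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	ls -l "/etc/ssl/certs/${h}.0"   # -> /etc/ssl/certs/minikubeCA.pem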
	I0108 22:52:31.022938  329388 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 22:52:31.025947  329388 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 22:52:31.025997  329388 kubeadm.go:404] StartCluster: {Name:addons-608450 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-608450 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 22:52:31.026103  329388 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0108 22:52:31.026173  329388 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 22:52:31.059581  329388 cri.go:89] found id: ""
	I0108 22:52:31.059657  329388 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 22:52:31.068124  329388 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 22:52:31.076463  329388 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0108 22:52:31.076525  329388 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 22:52:31.084687  329388 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 22:52:31.084755  329388 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 22:52:31.166995  329388 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-gcp\n", err: exit status 1
	I0108 22:52:31.230831  329388 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 22:52:40.354081  329388 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0108 22:52:40.354165  329388 kubeadm.go:322] [preflight] Running pre-flight checks
	I0108 22:52:40.354282  329388 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0108 22:52:40.354356  329388 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1047-gcp
	I0108 22:52:40.354428  329388 kubeadm.go:322] OS: Linux
	I0108 22:52:40.354525  329388 kubeadm.go:322] CGROUPS_CPU: enabled
	I0108 22:52:40.354606  329388 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0108 22:52:40.354672  329388 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0108 22:52:40.354744  329388 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0108 22:52:40.354802  329388 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0108 22:52:40.354883  329388 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0108 22:52:40.354950  329388 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0108 22:52:40.355022  329388 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0108 22:52:40.355117  329388 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0108 22:52:40.355219  329388 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 22:52:40.355402  329388 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 22:52:40.355545  329388 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 22:52:40.355676  329388 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 22:52:40.357092  329388 out.go:204]   - Generating certificates and keys ...
	I0108 22:52:40.357196  329388 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0108 22:52:40.357303  329388 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0108 22:52:40.357432  329388 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0108 22:52:40.357536  329388 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0108 22:52:40.357619  329388 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0108 22:52:40.357693  329388 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0108 22:52:40.357762  329388 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0108 22:52:40.357936  329388 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-608450 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0108 22:52:40.358015  329388 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0108 22:52:40.358179  329388 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-608450 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0108 22:52:40.358265  329388 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0108 22:52:40.358387  329388 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0108 22:52:40.358445  329388 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0108 22:52:40.358523  329388 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 22:52:40.358594  329388 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 22:52:40.358665  329388 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 22:52:40.358767  329388 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 22:52:40.358853  329388 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 22:52:40.358968  329388 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 22:52:40.359050  329388 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 22:52:40.360766  329388 out.go:204]   - Booting up control plane ...
	I0108 22:52:40.360856  329388 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 22:52:40.360918  329388 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 22:52:40.360979  329388 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 22:52:40.361064  329388 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 22:52:40.361140  329388 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 22:52:40.361194  329388 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0108 22:52:40.361327  329388 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 22:52:40.361412  329388 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.003912 seconds
	I0108 22:52:40.361554  329388 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 22:52:40.361715  329388 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 22:52:40.361791  329388 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 22:52:40.362039  329388 kubeadm.go:322] [mark-control-plane] Marking the node addons-608450 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 22:52:40.362115  329388 kubeadm.go:322] [bootstrap-token] Using token: 6yyx0y.bjqt2ubg60dog39m
	I0108 22:52:40.363530  329388 out.go:204]   - Configuring RBAC rules ...
	I0108 22:52:40.363640  329388 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 22:52:40.363775  329388 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 22:52:40.363972  329388 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 22:52:40.364140  329388 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 22:52:40.364302  329388 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 22:52:40.364441  329388 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 22:52:40.364547  329388 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 22:52:40.364590  329388 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0108 22:52:40.364628  329388 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0108 22:52:40.364640  329388 kubeadm.go:322] 
	I0108 22:52:40.364695  329388 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0108 22:52:40.364701  329388 kubeadm.go:322] 
	I0108 22:52:40.364794  329388 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0108 22:52:40.364811  329388 kubeadm.go:322] 
	I0108 22:52:40.364843  329388 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0108 22:52:40.364943  329388 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 22:52:40.365017  329388 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 22:52:40.365027  329388 kubeadm.go:322] 
	I0108 22:52:40.365103  329388 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0108 22:52:40.365112  329388 kubeadm.go:322] 
	I0108 22:52:40.365176  329388 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 22:52:40.365185  329388 kubeadm.go:322] 
	I0108 22:52:40.365247  329388 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0108 22:52:40.365317  329388 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 22:52:40.365379  329388 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 22:52:40.365386  329388 kubeadm.go:322] 
	I0108 22:52:40.365454  329388 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 22:52:40.365523  329388 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0108 22:52:40.365532  329388 kubeadm.go:322] 
	I0108 22:52:40.365617  329388 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 6yyx0y.bjqt2ubg60dog39m \
	I0108 22:52:40.365712  329388 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:1f39001c59ef0575e1bc9780ffa85edc1681f20574bd7111835c2c8563c3cd8d \
	I0108 22:52:40.365735  329388 kubeadm.go:322] 	--control-plane 
	I0108 22:52:40.365741  329388 kubeadm.go:322] 
	I0108 22:52:40.365807  329388 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0108 22:52:40.365814  329388 kubeadm.go:322] 
	I0108 22:52:40.365878  329388 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 6yyx0y.bjqt2ubg60dog39m \
	I0108 22:52:40.365999  329388 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:1f39001c59ef0575e1bc9780ffa85edc1681f20574bd7111835c2c8563c3cd8d 
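(Note: the --discovery-token-ca-cert-hash above is a SHA-256 over the DER encoding of the cluster CA's public key. It can be recomputed from the CA certificate, which this run keeps at /var/lib/minikube/certs/ca.crt, using the sequence documented for kubeadm:)

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'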
	I0108 22:52:40.366025  329388 cni.go:84] Creating CNI manager for ""
	I0108 22:52:40.366036  329388 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0108 22:52:40.367761  329388 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 22:52:40.369080  329388 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 22:52:40.373165  329388 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0108 22:52:40.373184  329388 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0108 22:52:40.393070  329388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 22:52:41.104556  329388 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 22:52:41.104628  329388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:52:41.104660  329388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a minikube.k8s.io/name=addons-608450 minikube.k8s.io/updated_at=2024_01_08T22_52_41_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:52:41.204669  329388 ops.go:34] apiserver oom_adj: -16
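(Note: the -16 read back here is the API server's OOM adjustment via the legacy /proc/PID/oom_adj interface: kubelet gives critical control-plane pods a strongly negative score so the kernel OOM killer targets them last. The modern equivalent, scaled to -1000..1000:)

	cat /proc/$(pgrep -f kube-apiserver)/oom_score_adj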
	I0108 22:52:41.204803  329388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:52:41.705788  329388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:52:42.205697  329388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:52:42.705119  329388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:52:43.205376  329388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:52:43.705188  329388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:52:44.205115  329388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:52:44.705117  329388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:52:45.204840  329388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:52:45.705881  329388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:52:46.205513  329388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:52:46.705506  329388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:52:47.205764  329388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:52:47.705447  329388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:52:48.205647  329388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:52:48.705839  329388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:52:49.205720  329388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:52:49.705281  329388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:52:50.205302  329388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:52:50.704982  329388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:52:51.205093  329388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:52:51.705767  329388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:52:52.205474  329388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:52:52.705711  329388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:52:52.778158  329388 kubeadm.go:1088] duration metric: took 11.673591652s to wait for elevateKubeSystemPrivileges.
	I0108 22:52:52.778188  329388 kubeadm.go:406] StartCluster complete in 21.752195667s
	I0108 22:52:52.778209  329388 settings.go:142] acquiring lock: {Name:mkc902113864abc3d31cd188d3cc72ba1bd92615 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:52:52.778314  329388 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17830-321683/kubeconfig
	I0108 22:52:52.778714  329388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-321683/kubeconfig: {Name:mkc128765c68b9b4bae543227dc1d65bab54c68e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:52:52.778918  329388 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 22:52:52.779071  329388 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0108 22:52:52.779189  329388 addons.go:69] Setting yakd=true in profile "addons-608450"
	I0108 22:52:52.779209  329388 addons.go:237] Setting addon yakd=true in "addons-608450"
	I0108 22:52:52.779212  329388 config.go:182] Loaded profile config "addons-608450": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 22:52:52.779288  329388 host.go:66] Checking if "addons-608450" exists ...
	I0108 22:52:52.779291  329388 addons.go:69] Setting inspektor-gadget=true in profile "addons-608450"
	I0108 22:52:52.779280  329388 addons.go:69] Setting registry=true in profile "addons-608450"
	I0108 22:52:52.779302  329388 addons.go:237] Setting addon inspektor-gadget=true in "addons-608450"
	I0108 22:52:52.779299  329388 addons.go:69] Setting metrics-server=true in profile "addons-608450"
	I0108 22:52:52.779325  329388 addons.go:237] Setting addon registry=true in "addons-608450"
	I0108 22:52:52.779337  329388 addons.go:237] Setting addon metrics-server=true in "addons-608450"
	I0108 22:52:52.779337  329388 host.go:66] Checking if "addons-608450" exists ...
	I0108 22:52:52.779387  329388 addons.go:69] Setting default-storageclass=true in profile "addons-608450"
	I0108 22:52:52.779395  329388 host.go:66] Checking if "addons-608450" exists ...
	I0108 22:52:52.779412  329388 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-608450"
	I0108 22:52:52.779886  329388 cli_runner.go:164] Run: docker container inspect addons-608450 --format={{.State.Status}}
	I0108 22:52:52.779902  329388 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-608450"
	I0108 22:52:52.779917  329388 addons.go:69] Setting storage-provisioner=true in profile "addons-608450"
	I0108 22:52:52.779929  329388 addons.go:69] Setting helm-tiller=true in profile "addons-608450"
	I0108 22:52:52.779934  329388 addons.go:237] Setting addon storage-provisioner=true in "addons-608450"
	I0108 22:52:52.779939  329388 addons.go:237] Setting addon helm-tiller=true in "addons-608450"
	I0108 22:52:52.779993  329388 addons.go:69] Setting gcp-auth=true in profile "addons-608450"
	I0108 22:52:52.780031  329388 mustload.go:65] Loading cluster: addons-608450
	I0108 22:52:52.780181  329388 addons.go:69] Setting ingress-dns=true in profile "addons-608450"
	I0108 22:52:52.780199  329388 addons.go:237] Setting addon ingress-dns=true in "addons-608450"
	I0108 22:52:52.779885  329388 cli_runner.go:164] Run: docker container inspect addons-608450 --format={{.State.Status}}
	I0108 22:52:52.779919  329388 addons.go:237] Setting addon nvidia-device-plugin=true in "addons-608450"
	I0108 22:52:52.781795  329388 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-608450"
	I0108 22:52:52.781823  329388 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-608450"
	I0108 22:52:52.782132  329388 cli_runner.go:164] Run: docker container inspect addons-608450 --format={{.State.Status}}
	I0108 22:52:52.782334  329388 host.go:66] Checking if "addons-608450" exists ...
	I0108 22:52:52.782366  329388 host.go:66] Checking if "addons-608450" exists ...
	I0108 22:52:52.782628  329388 cli_runner.go:164] Run: docker container inspect addons-608450 --format={{.State.Status}}
	I0108 22:52:52.782798  329388 cli_runner.go:164] Run: docker container inspect addons-608450 --format={{.State.Status}}
	I0108 22:52:52.782846  329388 cli_runner.go:164] Run: docker container inspect addons-608450 --format={{.State.Status}}
	I0108 22:52:52.783050  329388 host.go:66] Checking if "addons-608450" exists ...
	I0108 22:52:52.783171  329388 addons.go:69] Setting cloud-spanner=true in profile "addons-608450"
	I0108 22:52:52.783190  329388 addons.go:237] Setting addon cloud-spanner=true in "addons-608450"
	I0108 22:52:52.783271  329388 host.go:66] Checking if "addons-608450" exists ...
	I0108 22:52:52.783551  329388 cli_runner.go:164] Run: docker container inspect addons-608450 --format={{.State.Status}}
	I0108 22:52:52.783718  329388 cli_runner.go:164] Run: docker container inspect addons-608450 --format={{.State.Status}}
	I0108 22:52:52.783826  329388 config.go:182] Loaded profile config "addons-608450": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 22:52:52.784079  329388 cli_runner.go:164] Run: docker container inspect addons-608450 --format={{.State.Status}}
	I0108 22:52:52.784266  329388 host.go:66] Checking if "addons-608450" exists ...
	I0108 22:52:52.784376  329388 host.go:66] Checking if "addons-608450" exists ...
	I0108 22:52:52.786374  329388 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-608450"
	I0108 22:52:52.786504  329388 addons.go:69] Setting volumesnapshots=true in profile "addons-608450"
	I0108 22:52:52.786563  329388 addons.go:237] Setting addon volumesnapshots=true in "addons-608450"
	I0108 22:52:52.786639  329388 addons.go:237] Setting addon csi-hostpath-driver=true in "addons-608450"
	I0108 22:52:52.786720  329388 host.go:66] Checking if "addons-608450" exists ...
	I0108 22:52:52.786947  329388 host.go:66] Checking if "addons-608450" exists ...
	I0108 22:52:52.787008  329388 addons.go:69] Setting ingress=true in profile "addons-608450"
	I0108 22:52:52.787045  329388 addons.go:237] Setting addon ingress=true in "addons-608450"
	I0108 22:52:52.787121  329388 host.go:66] Checking if "addons-608450" exists ...
	I0108 22:52:52.787700  329388 cli_runner.go:164] Run: docker container inspect addons-608450 --format={{.State.Status}}
	I0108 22:52:52.788265  329388 cli_runner.go:164] Run: docker container inspect addons-608450 --format={{.State.Status}}
	I0108 22:52:52.788301  329388 cli_runner.go:164] Run: docker container inspect addons-608450 --format={{.State.Status}}
	I0108 22:52:52.779906  329388 cli_runner.go:164] Run: docker container inspect addons-608450 --format={{.State.Status}}
	I0108 22:52:52.812128  329388 cli_runner.go:164] Run: docker container inspect addons-608450 --format={{.State.Status}}
	I0108 22:52:52.812757  329388 cli_runner.go:164] Run: docker container inspect addons-608450 --format={{.State.Status}}
	I0108 22:52:52.817820  329388 out.go:177]   - Using image docker.io/registry:2.8.3
	I0108 22:52:52.819168  329388 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0108 22:52:52.820465  329388 addons.go:429] installing /etc/kubernetes/addons/registry-rc.yaml
	I0108 22:52:52.820486  329388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0108 22:52:52.820541  329388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-608450
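The inspect call above is how minikube discovers which host port Docker mapped to the container's SSH port (22/tcp); every "new ssh client" line later in this log connects to the port it returns. A standalone equivalent of that Go-template query, runnable against the same container while it is up (a sketch, not part of the test run):

    # Prints the host port bound to container port 22/tcp, e.g. 33074 —
    # the port the ssh clients below dial on 127.0.0.1.
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-608450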
	I0108 22:52:52.817843  329388 addons.go:237] Setting addon default-storageclass=true in "addons-608450"
	I0108 22:52:52.820981  329388 host.go:66] Checking if "addons-608450" exists ...
	I0108 22:52:52.821407  329388 cli_runner.go:164] Run: docker container inspect addons-608450 --format={{.State.Status}}
	I0108 22:52:52.824811  329388 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I0108 22:52:52.822632  329388 host.go:66] Checking if "addons-608450" exists ...
	I0108 22:52:52.830413  329388 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0108 22:52:52.831915  329388 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0108 22:52:52.831943  329388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0108 22:52:52.830027  329388 addons.go:237] Setting addon storage-provisioner-rancher=true in "addons-608450"
	I0108 22:52:52.832009  329388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-608450
	I0108 22:52:52.832052  329388 host.go:66] Checking if "addons-608450" exists ...
	I0108 22:52:52.830117  329388 addons.go:429] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0108 22:52:52.832331  329388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0108 22:52:52.832387  329388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-608450
	I0108 22:52:52.832741  329388 cli_runner.go:164] Run: docker container inspect addons-608450 --format={{.State.Status}}
	I0108 22:52:52.835577  329388 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0108 22:52:52.838348  329388 addons.go:429] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0108 22:52:52.838374  329388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0108 22:52:52.838453  329388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-608450
	I0108 22:52:52.842212  329388 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0108 22:52:52.846502  329388 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:52:52.846645  329388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/addons-608450/id_rsa Username:docker}
	I0108 22:52:52.848204  329388 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 22:52:52.848252  329388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 22:52:52.848340  329388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-608450
	I0108 22:52:52.848434  329388 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0108 22:52:52.851022  329388 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.5
	I0108 22:52:52.852507  329388 addons.go:429] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0108 22:52:52.852534  329388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I0108 22:52:52.852598  329388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-608450
	I0108 22:52:52.860109  329388 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.13
	I0108 22:52:52.866988  329388 addons.go:429] installing /etc/kubernetes/addons/deployment.yaml
	I0108 22:52:52.867022  329388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0108 22:52:52.867101  329388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-608450
	I0108 22:52:52.881748  329388 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 22:52:52.881772  329388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 22:52:52.881834  329388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-608450
	I0108 22:52:52.883664  329388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/addons-608450/id_rsa Username:docker}
	I0108 22:52:52.896646  329388 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I0108 22:52:52.890222  329388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/addons-608450/id_rsa Username:docker}
	I0108 22:52:52.896561  329388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/addons-608450/id_rsa Username:docker}
	I0108 22:52:52.898756  329388 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0108 22:52:52.901312  329388 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0108 22:52:52.903140  329388 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0108 22:52:52.899195  329388 addons.go:429] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0108 22:52:52.904340  329388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0108 22:52:52.904414  329388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-608450
	I0108 22:52:52.905722  329388 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0108 22:52:52.907613  329388 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0108 22:52:52.906194  329388 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0108 22:52:52.911307  329388 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0108 22:52:52.909206  329388 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0108 22:52:52.909448  329388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/addons-608450/id_rsa Username:docker}
	I0108 22:52:52.911360  329388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/addons-608450/id_rsa Username:docker}
	I0108 22:52:52.915094  329388 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0108 22:52:52.916263  329388 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0108 22:52:52.916280  329388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0108 22:52:52.917885  329388 out.go:177]   - Using image docker.io/busybox:stable
	I0108 22:52:52.914903  329388 addons.go:429] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0108 22:52:52.912870  329388 addons.go:429] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0108 22:52:52.913894  329388 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0108 22:52:52.916396  329388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-608450
	I0108 22:52:52.917495  329388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/addons-608450/id_rsa Username:docker}
	I0108 22:52:52.917959  329388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0108 22:52:52.919504  329388 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0108 22:52:52.919520  329388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0108 22:52:52.917971  329388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0108 22:52:52.919570  329388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-608450
	I0108 22:52:52.919588  329388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-608450
	I0108 22:52:52.919598  329388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-608450
	I0108 22:52:52.921383  329388 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0108 22:52:52.922867  329388 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0108 22:52:52.924439  329388 addons.go:429] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0108 22:52:52.924460  329388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0108 22:52:52.924536  329388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-608450
	I0108 22:52:52.931576  329388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/addons-608450/id_rsa Username:docker}
	I0108 22:52:52.931626  329388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/addons-608450/id_rsa Username:docker}
	I0108 22:52:52.940655  329388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/addons-608450/id_rsa Username:docker}
	I0108 22:52:52.942643  329388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/addons-608450/id_rsa Username:docker}
	I0108 22:52:52.947591  329388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/addons-608450/id_rsa Username:docker}
	W0108 22:52:52.947665  329388 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0108 22:52:52.947693  329388 retry.go:31] will retry after 324.279628ms: ssh: handshake failed: EOF
	I0108 22:52:52.953822  329388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/addons-608450/id_rsa Username:docker}
	I0108 22:52:52.976799  329388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/addons-608450/id_rsa Username:docker}
	I0108 22:52:53.147301  329388 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0108 22:52:53.250578  329388 addons.go:429] installing /etc/kubernetes/addons/registry-svc.yaml
	I0108 22:52:53.250668  329388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0108 22:52:53.357252  329388 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0108 22:52:53.357337  329388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0108 22:52:53.359787  329388 addons.go:429] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0108 22:52:53.359862  329388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0108 22:52:53.364514  329388 addons.go:429] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0108 22:52:53.364587  329388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0108 22:52:53.366435  329388 addons.go:429] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0108 22:52:53.366508  329388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0108 22:52:53.368194  329388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 22:52:53.457286  329388 addons.go:429] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0108 22:52:53.457377  329388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0108 22:52:53.462806  329388 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-608450" context rescaled to 1 replicas
	I0108 22:52:53.462912  329388 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 22:52:53.466274  329388 out.go:177] * Verifying Kubernetes components...
	I0108 22:52:53.467856  329388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:52:53.549757  329388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0108 22:52:53.561149  329388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0108 22:52:53.566167  329388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 22:52:53.644963  329388 addons.go:429] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0108 22:52:53.645055  329388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0108 22:52:53.649881  329388 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0108 22:52:53.649977  329388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0108 22:52:53.650334  329388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0108 22:52:53.650778  329388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0108 22:52:53.653101  329388 addons.go:429] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0108 22:52:53.653124  329388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0108 22:52:53.657163  329388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0108 22:52:53.750255  329388 addons.go:429] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0108 22:52:53.750290  329388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0108 22:52:53.760030  329388 addons.go:429] installing /etc/kubernetes/addons/ig-role.yaml
	I0108 22:52:53.760064  329388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0108 22:52:53.845067  329388 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 22:52:53.845170  329388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0108 22:52:53.866509  329388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0108 22:52:53.944938  329388 addons.go:429] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0108 22:52:53.945028  329388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0108 22:52:53.950035  329388 addons.go:429] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0108 22:52:53.950061  329388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0108 22:52:54.144933  329388 addons.go:429] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0108 22:52:54.144957  329388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0108 22:52:54.155106  329388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0108 22:52:54.257836  329388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 22:52:54.345331  329388 addons.go:429] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0108 22:52:54.345359  329388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0108 22:52:54.360606  329388 addons.go:429] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0108 22:52:54.360687  329388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0108 22:52:54.451626  329388 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0108 22:52:54.451676  329388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0108 22:52:54.457445  329388 addons.go:429] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0108 22:52:54.457487  329388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0108 22:52:54.561839  329388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0108 22:52:54.645114  329388 addons.go:429] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0108 22:52:54.645156  329388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0108 22:52:54.662495  329388 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0108 22:52:54.662585  329388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0108 22:52:54.664867  329388 addons.go:429] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0108 22:52:54.664952  329388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0108 22:52:54.944111  329388 addons.go:429] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0108 22:52:54.944142  329388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0108 22:52:55.047167  329388 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0108 22:52:55.047249  329388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0108 22:52:55.161225  329388 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0108 22:52:55.161312  329388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0108 22:52:55.347304  329388 addons.go:429] installing /etc/kubernetes/addons/ig-crd.yaml
	I0108 22:52:55.347385  329388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0108 22:52:55.351203  329388 addons.go:429] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0108 22:52:55.351235  329388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0108 22:52:55.547378  329388 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0108 22:52:55.547410  329388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0108 22:52:55.760410  329388 addons.go:429] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0108 22:52:55.760494  329388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0108 22:52:55.866614  329388 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0108 22:52:55.866645  329388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0108 22:52:55.866999  329388 addons.go:429] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0108 22:52:55.867014  329388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0108 22:52:56.051667  329388 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.904293859s)
	I0108 22:52:56.051700  329388 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
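The 2.9s command that just completed is the CoreDNS rewrite minikube uses to make host.minikube.internal resolve to the host gateway: the sed expressions splice a hosts block into the Corefile stored in the coredns ConfigMap before replacing it. Reconstructed from those sed expressions (the resulting Corefile fragment itself is not printed in this log), the injected block is:

    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }

with a `log` directive additionally inserted before the `errors` line.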
	I0108 22:52:56.146640  329388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0108 22:52:56.345779  329388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0108 22:52:56.445615  329388 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0108 22:52:56.445702  329388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0108 22:52:56.649486  329388 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0108 22:52:56.649570  329388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0108 22:52:57.057866  329388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0108 22:52:58.764111  329388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.395873482s)
	I0108 22:52:58.764217  329388 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (5.296333359s)
	I0108 22:52:58.765281  329388 node_ready.go:35] waiting up to 6m0s for node "addons-608450" to be "Ready" ...
	I0108 22:52:59.648313  329388 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0108 22:52:59.648383  329388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-608450
	I0108 22:52:59.668753  329388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/addons-608450/id_rsa Username:docker}
	I0108 22:52:59.865451  329388 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0108 22:52:59.866634  329388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.31683487s)
	I0108 22:52:59.866669  329388 addons.go:473] Verifying addon ingress=true in "addons-608450"
	I0108 22:52:59.866717  329388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.30547507s)
	I0108 22:52:59.868575  329388 out.go:177] * Verifying ingress addon...
	I0108 22:52:59.866752  329388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.300502742s)
	I0108 22:52:59.866801  329388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.216409636s)
	I0108 22:52:59.866839  329388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.215993741s)
	I0108 22:52:59.866870  329388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.209642426s)
	I0108 22:52:59.866912  329388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.000368406s)
	I0108 22:52:59.866974  329388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (5.711775665s)
	I0108 22:52:59.867066  329388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.609186839s)
	I0108 22:52:59.867125  329388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.305250265s)
	I0108 22:52:59.867213  329388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.720491132s)
	I0108 22:52:59.867324  329388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.521494336s)
	I0108 22:52:59.870257  329388 addons.go:473] Verifying addon registry=true in "addons-608450"
	I0108 22:52:59.870285  329388 addons.go:473] Verifying addon metrics-server=true in "addons-608450"
	I0108 22:52:59.871991  329388 out.go:177] * Verifying registry addon...
	W0108 22:52:59.870382  329388 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0108 22:52:59.871188  329388 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0108 22:52:59.873821  329388 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-608450 service yakd-dashboard -n yakd-dashboard
	
	
	I0108 22:52:59.873951  329388 retry.go:31] will retry after 210.619817ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
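Both failures above are the same CRD race: the VolumeSnapshotClass manifest is applied in the same kubectl invocation that creates the snapshot.storage.k8s.io CRDs, and the client's REST mapper has not yet discovered the new kind — hence "ensure CRDs are installed first". minikube's recovery is the retry visible below at 22:53:00.086416, which re-runs the apply with --force and completes successfully. For comparison, a minimal sketch of a race-free ordering over the same manifest paths (not what minikube does here):

    # Apply the CRD first, wait until the API server reports it Established,
    # then apply the custom resource that depends on it.
    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for condition=established \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml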
	I0108 22:52:59.874579  329388 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0108 22:52:59.947696  329388 addons.go:237] Setting addon gcp-auth=true in "addons-608450"
	I0108 22:52:59.947756  329388 host.go:66] Checking if "addons-608450" exists ...
	I0108 22:52:59.947821  329388 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0108 22:52:59.947839  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:52:59.948107  329388 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0108 22:52:59.948128  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
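The kapi.go:96 lines that dominate the remainder of this log are minikube's readiness poll: for each label selector it lists the matching pods (the kapi.go:86 "Found N Pods" lines) and re-checks on a short interval until the pods leave Pending. A rough kubectl equivalent of one such poll (timeout value chosen for illustration):

    kubectl --context addons-608450 -n ingress-nginx wait pod \
      --selector app.kubernetes.io/name=ingress-nginx \
      --for=condition=Ready --timeout=6m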
	I0108 22:52:59.948305  329388 cli_runner.go:164] Run: docker container inspect addons-608450 --format={{.State.Status}}
	W0108 22:52:59.949110  329388 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
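The warning above is an optimistic-concurrency conflict rather than a missing capability: while the default-storageclass callback tries to mark local-path non-default, the same StorageClass is being modified concurrently (plausibly by the storage-provisioner-rancher addon applied at 22:52:53.650334 that owns it), so the update loses on resourceVersion. The manual equivalent of what the callback attempts, via the standard default-class annotation (shown as a sketch):

    # Demote local-path and promote standard using the is-default-class annotation.
    kubectl patch storageclass local-path -p \
      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
    kubectl patch storageclass standard -p \
      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'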
	I0108 22:52:59.969385  329388 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0108 22:52:59.969468  329388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-608450
	I0108 22:52:59.988320  329388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/addons-608450/id_rsa Username:docker}
	I0108 22:53:00.086416  329388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0108 22:53:00.377485  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:00.380206  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:00.854328  329388 node_ready.go:58] node "addons-608450" has status "Ready":"False"
	I0108 22:53:00.952030  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:00.952075  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:01.444922  329388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.386897881s)
	I0108 22:53:01.445168  329388 addons.go:473] Verifying addon csi-hostpath-driver=true in "addons-608450"
	I0108 22:53:01.452926  329388 out.go:177] * Verifying csi-hostpath-driver addon...
	I0108 22:53:01.451374  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:01.451973  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:01.455683  329388 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0108 22:53:01.461056  329388 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0108 22:53:01.461079  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:01.878395  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:01.880458  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:01.960597  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:02.447569  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:02.448998  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:02.462096  329388 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.492664454s)
	I0108 22:53:02.464284  329388 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0108 22:53:02.462376  329388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.375896523s)
	I0108 22:53:02.467578  329388 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0108 22:53:02.467577  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:02.545251  329388 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0108 22:53:02.545345  329388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0108 22:53:02.644920  329388 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0108 22:53:02.644999  329388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0108 22:53:02.670283  329388 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0108 22:53:02.670361  329388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I0108 22:53:02.765314  329388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0108 22:53:02.947989  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:02.948590  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:02.966951  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:03.269587  329388 node_ready.go:58] node "addons-608450" has status "Ready":"False"
	I0108 22:53:03.377693  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:03.446902  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:03.460501  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:03.947183  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:03.948109  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:03.962116  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:04.447923  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:04.449184  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:04.468885  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:04.559351  329388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.793978314s)
	I0108 22:53:04.560337  329388 addons.go:473] Verifying addon gcp-auth=true in "addons-608450"
	I0108 22:53:04.562457  329388 out.go:177] * Verifying gcp-auth addon...
	I0108 22:53:04.564766  329388 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0108 22:53:04.567627  329388 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0108 22:53:04.567646  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:04.877842  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:04.880046  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:04.960982  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:05.068720  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:05.378587  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:05.379667  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:05.460791  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:05.568994  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:05.768833  329388 node_ready.go:58] node "addons-608450" has status "Ready":"False"
	I0108 22:53:05.877900  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:05.879979  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:05.961528  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:06.068363  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:06.377886  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:06.379732  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:06.461296  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:06.568765  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:06.878812  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:06.879016  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:06.959958  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:07.068770  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:07.378403  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:07.379147  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:07.460093  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:07.569018  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:07.769272  329388 node_ready.go:58] node "addons-608450" has status "Ready":"False"
	I0108 22:53:07.880443  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:07.883420  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:07.961324  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:08.068146  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:08.377920  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:08.379130  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:08.460120  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:08.568618  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:08.877854  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:08.879649  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:08.960412  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:09.068234  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:09.377686  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:09.379919  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:09.460864  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:09.569097  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:09.877932  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:09.879781  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:09.960428  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:10.068337  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:10.269439  329388 node_ready.go:58] node "addons-608450" has status "Ready":"False"
	I0108 22:53:10.378663  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:10.379018  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:10.459916  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:10.568541  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:10.878428  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:10.879456  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:10.960523  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:11.068525  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:11.377653  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:11.379376  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:11.460287  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:11.568156  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:11.878068  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:11.879702  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:11.960625  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:12.068954  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:12.378355  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:12.379725  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:12.460534  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:12.568415  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:12.768747  329388 node_ready.go:58] node "addons-608450" has status "Ready":"False"
	I0108 22:53:12.877764  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:12.879436  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:12.960134  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:13.068935  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:13.377482  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:13.379461  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:13.460880  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:13.568547  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:13.878038  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:13.879877  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:13.960831  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:14.068946  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:14.377682  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:14.379546  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:14.460400  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:14.567766  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:14.768852  329388 node_ready.go:58] node "addons-608450" has status "Ready":"False"
	I0108 22:53:14.877596  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:14.879377  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:14.960419  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:15.068619  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:15.378147  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:15.378802  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:15.461021  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:15.568765  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:15.878137  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:15.879888  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:15.960811  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:16.068368  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:16.378330  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:16.378446  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:16.460539  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:16.568179  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:16.769251  329388 node_ready.go:58] node "addons-608450" has status "Ready":"False"
	I0108 22:53:16.877938  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:16.881356  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:16.960136  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:17.067941  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:17.377130  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:17.379099  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:17.460382  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:17.568106  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:17.877811  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:17.879656  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:17.960498  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:18.068483  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:18.378190  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:18.379094  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:18.460272  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:18.567676  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:18.878077  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:18.880041  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:18.959878  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:19.069044  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:19.269264  329388 node_ready.go:58] node "addons-608450" has status "Ready":"False"
	I0108 22:53:19.378240  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:19.379834  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:19.461621  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:19.568350  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:19.878092  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:19.879994  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:19.959844  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:20.068734  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:20.378008  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:20.379555  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:20.460952  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:20.568120  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:20.877722  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:20.879330  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:20.960332  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:21.068261  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:21.269389  329388 node_ready.go:58] node "addons-608450" has status "Ready":"False"
	I0108 22:53:21.378057  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:21.379506  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:21.460687  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:21.568090  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:21.877883  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:21.879542  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:21.960187  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:22.068218  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:22.378427  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:22.379000  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:22.459807  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:22.568280  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:22.878588  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:22.879437  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:22.960198  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:23.070063  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:23.377758  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:23.379595  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:23.460608  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:23.568092  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:23.769209  329388 node_ready.go:58] node "addons-608450" has status "Ready":"False"
	I0108 22:53:23.877845  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:23.879652  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:23.960222  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:24.068064  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:24.377321  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:24.379682  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:24.460619  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:24.568161  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:24.877983  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:24.879456  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:24.960295  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:25.067905  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:25.270312  329388 node_ready.go:49] node "addons-608450" has status "Ready":"True"
	I0108 22:53:25.270389  329388 node_ready.go:38] duration metric: took 26.504991629s waiting for node "addons-608450" to be "Ready" ...
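The node_ready lines above are minikube polling the node's Ready condition until it flips to True. A minimal client-go sketch of that wait, under assumptions: the kubeconfig path, poll interval, and timeout are illustrative, not minikube's actual values.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Illustrative kubeconfig path; minikube resolves its own config.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		// Poll every 2s, roughly the cadence of the node_ready lines above;
		// the 6m timeout is an assumption.
		err = wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), "addons-608450", metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient errors as "not ready yet"
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
		fmt.Println("node ready wait finished, err =", err)
	}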
	I0108 22:53:25.270414  329388 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:53:25.354302  329388 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-sd49x" in "kube-system" namespace to be "Ready" ...
	I0108 22:53:25.378552  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:25.446518  329388 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0108 22:53:25.446544  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:25.461054  329388 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0108 22:53:25.461077  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
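The "Found N Pods for label selector" lines correspond to a namespaced pod list filtered by label; the addon pods in question live in kube-system (see the pod inventory later in this log). A sketch of that lookup, assuming the package and imports of the node-ready sketch above; the function name is mine.

	// listByLabel counts pods matching a label selector, as in the
	// "Found 2 Pods for label selector kubernetes.io/minikube-addons=registry" line.
	// cs is an assumed pre-built *kubernetes.Clientset.
	func listByLabel(ctx context.Context, cs *kubernetes.Clientset) error {
		pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
			LabelSelector: "kubernetes.io/minikube-addons=registry",
		})
		if err != nil {
			return err
		}
		fmt.Printf("Found %d Pods for label selector\n", len(pods.Items))
		return nil
	}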
	I0108 22:53:25.568765  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:25.879336  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:25.881473  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:25.962309  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:26.068086  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:26.379083  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:26.380664  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:26.461914  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:26.568702  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:26.861078  329388 pod_ready.go:92] pod "coredns-5dd5756b68-sd49x" in "kube-system" namespace has status "Ready":"True"
	I0108 22:53:26.861113  329388 pod_ready.go:81] duration metric: took 1.506723306s waiting for pod "coredns-5dd5756b68-sd49x" in "kube-system" namespace to be "Ready" ...
	I0108 22:53:26.861144  329388 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-608450" in "kube-system" namespace to be "Ready" ...
	I0108 22:53:26.867159  329388 pod_ready.go:92] pod "etcd-addons-608450" in "kube-system" namespace has status "Ready":"True"
	I0108 22:53:26.867189  329388 pod_ready.go:81] duration metric: took 6.037394ms waiting for pod "etcd-addons-608450" in "kube-system" namespace to be "Ready" ...
	I0108 22:53:26.867204  329388 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-608450" in "kube-system" namespace to be "Ready" ...
	I0108 22:53:26.873678  329388 pod_ready.go:92] pod "kube-apiserver-addons-608450" in "kube-system" namespace has status "Ready":"True"
	I0108 22:53:26.873713  329388 pod_ready.go:81] duration metric: took 6.498998ms waiting for pod "kube-apiserver-addons-608450" in "kube-system" namespace to be "Ready" ...
	I0108 22:53:26.873728  329388 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-608450" in "kube-system" namespace to be "Ready" ...
	I0108 22:53:26.877929  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:26.880521  329388 pod_ready.go:92] pod "kube-controller-manager-addons-608450" in "kube-system" namespace has status "Ready":"True"
	I0108 22:53:26.880552  329388 pod_ready.go:81] duration metric: took 6.815157ms waiting for pod "kube-controller-manager-addons-608450" in "kube-system" namespace to be "Ready" ...
	I0108 22:53:26.880570  329388 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5x2h4" in "kube-system" namespace to be "Ready" ...
	I0108 22:53:26.882216  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:26.886748  329388 pod_ready.go:92] pod "kube-proxy-5x2h4" in "kube-system" namespace has status "Ready":"True"
	I0108 22:53:26.886774  329388 pod_ready.go:81] duration metric: took 6.195448ms waiting for pod "kube-proxy-5x2h4" in "kube-system" namespace to be "Ready" ...
	I0108 22:53:26.886786  329388 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-608450" in "kube-system" namespace to be "Ready" ...
	I0108 22:53:26.962825  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:27.068909  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:27.269776  329388 pod_ready.go:92] pod "kube-scheduler-addons-608450" in "kube-system" namespace has status "Ready":"True"
	I0108 22:53:27.269804  329388 pod_ready.go:81] duration metric: took 383.008723ms waiting for pod "kube-scheduler-addons-608450" in "kube-system" namespace to be "Ready" ...
	I0108 22:53:27.269818  329388 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-s42rt" in "kube-system" namespace to be "Ready" ...
	I0108 22:53:27.378056  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:27.379913  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:27.461659  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:27.568574  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:27.878582  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:27.886191  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:27.962719  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:28.069104  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:28.378367  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:28.380816  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:28.461490  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:28.568915  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:28.879107  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:28.879808  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:28.961303  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:29.068457  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:29.275788  329388 pod_ready.go:102] pod "metrics-server-7c66d45ddc-s42rt" in "kube-system" namespace has status "Ready":"False"
	I0108 22:53:29.377618  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:29.379438  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:29.461026  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:29.568422  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:29.878370  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:29.879285  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:29.960960  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:30.068226  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:30.377783  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:30.379889  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:30.461426  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:30.568911  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:30.878475  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:30.879751  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:30.961260  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:31.068977  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:31.276132  329388 pod_ready.go:102] pod "metrics-server-7c66d45ddc-s42rt" in "kube-system" namespace has status "Ready":"False"
	I0108 22:53:31.377695  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:31.380280  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:31.461422  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:31.568612  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:31.878560  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:31.880601  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:31.961244  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:32.068834  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:32.380043  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:32.380243  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:32.462589  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:32.570130  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:32.949195  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:32.949922  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:32.965884  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:33.069153  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:33.347365  329388 pod_ready.go:102] pod "metrics-server-7c66d45ddc-s42rt" in "kube-system" namespace has status "Ready":"False"
	I0108 22:53:33.449114  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:33.449938  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:33.462351  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:33.569590  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:33.878419  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:33.880299  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:33.963531  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:34.069236  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:34.378441  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:34.380181  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:34.462343  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:34.568993  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:34.878778  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:34.884826  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:34.962787  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:35.069558  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:35.378627  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:35.380874  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:35.462364  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:35.568890  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:35.776591  329388 pod_ready.go:102] pod "metrics-server-7c66d45ddc-s42rt" in "kube-system" namespace has status "Ready":"False"
	I0108 22:53:35.878829  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:35.880367  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:35.961462  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:36.069209  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:36.378732  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:36.380666  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:36.461181  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:36.568690  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:36.878863  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:36.880866  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:36.962622  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:37.068814  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:37.378047  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:37.379967  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:37.461144  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:37.568174  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:37.878852  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:37.880314  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:37.961037  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:38.068496  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:38.277250  329388 pod_ready.go:102] pod "metrics-server-7c66d45ddc-s42rt" in "kube-system" namespace has status "Ready":"False"
	I0108 22:53:38.378473  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:38.380457  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:38.461568  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:38.568136  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:38.877304  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:38.879845  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:38.961563  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:39.068606  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:39.378001  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:39.380378  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:39.460929  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:39.568069  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:39.878748  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:39.879946  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:39.961503  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:40.068971  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:40.378620  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:40.381356  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:40.462265  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:40.570849  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:40.847302  329388 pod_ready.go:92] pod "metrics-server-7c66d45ddc-s42rt" in "kube-system" namespace has status "Ready":"True"
	I0108 22:53:40.847336  329388 pod_ready.go:81] duration metric: took 13.577509752s waiting for pod "metrics-server-7c66d45ddc-s42rt" in "kube-system" namespace to be "Ready" ...
	I0108 22:53:40.847352  329388 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-9fbl6" in "kube-system" namespace to be "Ready" ...
	I0108 22:53:40.854766  329388 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-9fbl6" in "kube-system" namespace has status "Ready":"True"
	I0108 22:53:40.854800  329388 pod_ready.go:81] duration metric: took 7.439042ms waiting for pod "nvidia-device-plugin-daemonset-9fbl6" in "kube-system" namespace to be "Ready" ...
	I0108 22:53:40.854829  329388 pod_ready.go:38] duration metric: took 15.584392276s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
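Each pod_ready wait above reduces to polling the pod's Ready condition; the duration metrics are just timestamps taken around that poll. A minimal sketch of the readiness predicate, assuming the package and imports of the node-ready sketch; the helper name is mine.

	// isPodReady reports whether the pod's Ready condition is True,
	// the predicate behind the pod_ready waits logged above.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}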
	I0108 22:53:40.854892  329388 api_server.go:52] waiting for apiserver process to appear ...
	I0108 22:53:40.854959  329388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:53:40.869760  329388 api_server.go:72] duration metric: took 47.406772872s to wait for apiserver process to appear ...
	I0108 22:53:40.869843  329388 api_server.go:88] waiting for apiserver healthz status ...
	I0108 22:53:40.869881  329388 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0108 22:53:40.876429  329388 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0108 22:53:40.945025  329388 api_server.go:141] control plane version: v1.28.4
	I0108 22:53:40.945070  329388 api_server.go:131] duration metric: took 75.206439ms to wait for apiserver health ...
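The api_server.go healthz check above expects an HTTP 200 with body "ok" from the /healthz endpoint. A standalone sketch of that probe over plain net/http; InsecureSkipVerify is an illustrative shortcut, since a real client would verify against the cluster CA.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		// Skipping certificate verification for brevity only; production
		// code should trust the apiserver's CA instead.
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	}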
	I0108 22:53:40.945083  329388 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 22:53:40.948341  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:40.948564  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:41.054109  329388 system_pods.go:59] 19 kube-system pods found
	I0108 22:53:41.054161  329388 system_pods.go:61] "coredns-5dd5756b68-sd49x" [0b8196f0-ee6e-4fe3-b00c-067228595959] Running
	I0108 22:53:41.054174  329388 system_pods.go:61] "csi-hostpath-attacher-0" [291dd835-625a-4e5f-8a97-e28bef6f4f42] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0108 22:53:41.054184  329388 system_pods.go:61] "csi-hostpath-resizer-0" [f73b05b7-a195-48dc-8c9c-0c8127990176] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0108 22:53:41.054195  329388 system_pods.go:61] "csi-hostpathplugin-4zcjs" [eabe5207-5806-4a3a-8360-6b107da13deb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0108 22:53:41.054210  329388 system_pods.go:61] "etcd-addons-608450" [17bd2c6d-08fa-4856-b2b9-f76f1c377f22] Running
	I0108 22:53:41.054223  329388 system_pods.go:61] "kindnet-nnd5g" [b95645c2-e51a-4fb3-986a-35b98d294cc7] Running
	I0108 22:53:41.054229  329388 system_pods.go:61] "kube-apiserver-addons-608450" [2b375b26-75f3-438b-9460-2effa4dcde8e] Running
	I0108 22:53:41.054241  329388 system_pods.go:61] "kube-controller-manager-addons-608450" [b7080640-4f2d-4000-9c5e-d57c3b2baa0d] Running
	I0108 22:53:41.054259  329388 system_pods.go:61] "kube-ingress-dns-minikube" [945400d4-8f0f-428a-a8c3-9ec8ff72f174] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0108 22:53:41.054272  329388 system_pods.go:61] "kube-proxy-5x2h4" [705320dc-556b-48ca-beb4-0e24a944db17] Running
	I0108 22:53:41.054289  329388 system_pods.go:61] "kube-scheduler-addons-608450" [2b87c398-569d-41c7-82bb-f73652533ba3] Running
	I0108 22:53:41.054294  329388 system_pods.go:61] "metrics-server-7c66d45ddc-s42rt" [a1314306-1372-43ec-bc21-115b88b40633] Running
	I0108 22:53:41.054300  329388 system_pods.go:61] "nvidia-device-plugin-daemonset-9fbl6" [c5f6cd8f-ab46-4842-a3a0-32b3d1ad0604] Running
	I0108 22:53:41.054315  329388 system_pods.go:61] "registry-proxy-hbq4v" [df8417ba-d80c-4116-a735-7d5e7a4728b8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0108 22:53:41.054323  329388 system_pods.go:61] "registry-rkzl5" [1fcdf9f1-94c3-44b6-90c7-f65016ba020b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0108 22:53:41.054338  329388 system_pods.go:61] "snapshot-controller-58dbcc7b99-fcpgr" [0c19c95f-0638-4a70-8831-71f6b189f6b2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0108 22:53:41.054352  329388 system_pods.go:61] "snapshot-controller-58dbcc7b99-p792p" [92fcfe2a-70c1-4887-a658-f608199f7086] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0108 22:53:41.054365  329388 system_pods.go:61] "storage-provisioner" [8f1e2bc9-1820-4dd6-9be4-e8a069b5ad4e] Running
	I0108 22:53:41.054373  329388 system_pods.go:61] "tiller-deploy-7b677967b9-jkj4j" [4e5dde9a-5f31-41be-b435-cbe1564e4068] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0108 22:53:41.054388  329388 system_pods.go:74] duration metric: took 109.247324ms to wait for pod list to return data ...
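The 19-pod inventory above is a single unfiltered list of the kube-system namespace, printed with each pod's UID and phase. A sketch of producing that summary, assuming the package and imports of the node-ready sketch; the function name is mine.

	// dumpSystemPods prints each kube-system pod with its phase, roughly
	// matching the system_pods.go lines above. cs is an assumed clientset.
	func dumpSystemPods(ctx context.Context, cs *kubernetes.Clientset) error {
		pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}
		fmt.Printf("%d kube-system pods found\n", len(pods.Items))
		for _, p := range pods.Items {
			fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
		}
		return nil
	}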
	I0108 22:53:41.054411  329388 default_sa.go:34] waiting for default service account to be created ...
	I0108 22:53:41.061628  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:41.062998  329388 default_sa.go:45] found service account: "default"
	I0108 22:53:41.063027  329388 default_sa.go:55] duration metric: took 8.60143ms for default service account to be created ...
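The default_sa.go step polls for the "default" ServiceAccount in the default namespace, which the controller manager creates shortly after the cluster comes up. The equivalent one-shot lookup, assuming the same package and imports as above; the surrounding retry loop is omitted and the helper name is mine.

	// hasDefaultSA reports whether the "default" ServiceAccount exists yet.
	// cs is an assumed clientset.
	func hasDefaultSA(ctx context.Context, cs *kubernetes.Clientset) bool {
		_, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
		return err == nil
	}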
	I0108 22:53:41.063039  329388 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 22:53:41.145975  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:41.150103  329388 system_pods.go:86] 19 kube-system pods found
	I0108 22:53:41.150134  329388 system_pods.go:89] "coredns-5dd5756b68-sd49x" [0b8196f0-ee6e-4fe3-b00c-067228595959] Running
	I0108 22:53:41.150143  329388 system_pods.go:89] "csi-hostpath-attacher-0" [291dd835-625a-4e5f-8a97-e28bef6f4f42] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0108 22:53:41.150153  329388 system_pods.go:89] "csi-hostpath-resizer-0" [f73b05b7-a195-48dc-8c9c-0c8127990176] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0108 22:53:41.150165  329388 system_pods.go:89] "csi-hostpathplugin-4zcjs" [eabe5207-5806-4a3a-8360-6b107da13deb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0108 22:53:41.150184  329388 system_pods.go:89] "etcd-addons-608450" [17bd2c6d-08fa-4856-b2b9-f76f1c377f22] Running
	I0108 22:53:41.150189  329388 system_pods.go:89] "kindnet-nnd5g" [b95645c2-e51a-4fb3-986a-35b98d294cc7] Running
	I0108 22:53:41.150194  329388 system_pods.go:89] "kube-apiserver-addons-608450" [2b375b26-75f3-438b-9460-2effa4dcde8e] Running
	I0108 22:53:41.150199  329388 system_pods.go:89] "kube-controller-manager-addons-608450" [b7080640-4f2d-4000-9c5e-d57c3b2baa0d] Running
	I0108 22:53:41.150209  329388 system_pods.go:89] "kube-ingress-dns-minikube" [945400d4-8f0f-428a-a8c3-9ec8ff72f174] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0108 22:53:41.150214  329388 system_pods.go:89] "kube-proxy-5x2h4" [705320dc-556b-48ca-beb4-0e24a944db17] Running
	I0108 22:53:41.150226  329388 system_pods.go:89] "kube-scheduler-addons-608450" [2b87c398-569d-41c7-82bb-f73652533ba3] Running
	I0108 22:53:41.150230  329388 system_pods.go:89] "metrics-server-7c66d45ddc-s42rt" [a1314306-1372-43ec-bc21-115b88b40633] Running
	I0108 22:53:41.150234  329388 system_pods.go:89] "nvidia-device-plugin-daemonset-9fbl6" [c5f6cd8f-ab46-4842-a3a0-32b3d1ad0604] Running
	I0108 22:53:41.150240  329388 system_pods.go:89] "registry-proxy-hbq4v" [df8417ba-d80c-4116-a735-7d5e7a4728b8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0108 22:53:41.150254  329388 system_pods.go:89] "registry-rkzl5" [1fcdf9f1-94c3-44b6-90c7-f65016ba020b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0108 22:53:41.150274  329388 system_pods.go:89] "snapshot-controller-58dbcc7b99-fcpgr" [0c19c95f-0638-4a70-8831-71f6b189f6b2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0108 22:53:41.150297  329388 system_pods.go:89] "snapshot-controller-58dbcc7b99-p792p" [92fcfe2a-70c1-4887-a658-f608199f7086] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0108 22:53:41.150301  329388 system_pods.go:89] "storage-provisioner" [8f1e2bc9-1820-4dd6-9be4-e8a069b5ad4e] Running
	I0108 22:53:41.150310  329388 system_pods.go:89] "tiller-deploy-7b677967b9-jkj4j" [4e5dde9a-5f31-41be-b435-cbe1564e4068] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0108 22:53:41.150320  329388 system_pods.go:126] duration metric: took 87.275698ms to wait for k8s-apps to be running ...
	I0108 22:53:41.150329  329388 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 22:53:41.150388  329388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:53:41.163622  329388 system_svc.go:56] duration metric: took 13.279337ms WaitForService to wait for kubelet.
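The system_svc.go check shells out to systemd over the SSH runner; `systemctl is-active --quiet` prints nothing and exits 0 only when the unit is active, so the exit status alone answers the question. A standalone local sketch of the same probe (run locally here for illustration, without sudo and with the unit name simplified; the log shows minikube's exact invocation over SSH).

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Non-nil err means a non-zero exit, i.e. the unit is not active.
		err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
		fmt.Println("kubelet active:", err == nil)
	}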
	I0108 22:53:41.163652  329388 kubeadm.go:581] duration metric: took 47.700674464s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 22:53:41.163682  329388 node_conditions.go:102] verifying NodePressure condition ...
	I0108 22:53:41.166847  329388 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0108 22:53:41.166872  329388 node_conditions.go:123] node cpu capacity is 8
	I0108 22:53:41.166886  329388 node_conditions.go:105] duration metric: took 3.198535ms to run NodePressure ...
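The node_conditions.go step reads the node's reported capacity (ephemeral storage 304681132Ki and 8 CPUs above) as part of verifying NodePressure. A sketch of pulling those two capacity figures, assuming the package and imports of the node-ready sketch; the node name is copied from the log and the function name is mine.

	// printCapacity echoes the two capacity figures logged by node_conditions.go.
	// cs is an assumed clientset.
	func printCapacity(ctx context.Context, cs *kubernetes.Clientset) error {
		node, err := cs.CoreV1().Nodes().Get(ctx, "addons-608450", metav1.GetOptions{})
		if err != nil {
			return err
		}
		storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := node.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node storage ephemeral capacity is %s\n", storage.String())
		fmt.Printf("node cpu capacity is %s\n", cpu.String())
		return nil
	}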
	I0108 22:53:41.166909  329388 start.go:228] waiting for startup goroutines ...
	I0108 22:53:41.378441  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:41.380502  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:41.469984  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:41.568955  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:41.877621  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:41.880386  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:41.961399  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:42.069587  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:42.378538  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:42.380919  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:42.462190  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:42.568582  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:42.878843  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:42.880996  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:42.962451  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:43.068772  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:43.380834  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:43.381027  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:43.461416  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:43.569095  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:43.950599  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:43.950712  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:43.961953  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:44.069501  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:44.379634  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:44.446052  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:44.462407  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:44.568839  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:44.878396  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:44.880825  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:44.962374  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:45.069035  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:45.378218  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:45.380629  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:45.461159  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:45.568881  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:45.879093  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:45.880965  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:45.961567  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:46.069425  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:46.379154  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:46.380191  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:46.462018  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:46.568783  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:46.879701  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:46.880674  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:46.961502  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:47.069850  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:47.378308  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:47.379988  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:47.462203  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:47.567680  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:47.878388  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:47.882262  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:47.962161  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:48.069070  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:48.378289  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:48.380356  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:48.461232  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:48.568237  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:48.878634  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:48.881117  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:48.961632  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:49.069091  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:49.379136  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:49.381505  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:49.462199  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:49.568375  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:49.878099  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:49.880469  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:49.960939  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:50.068381  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:50.379480  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:50.380260  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:50.461270  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:50.567934  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:50.878399  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:50.880372  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:50.960893  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:51.068711  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:51.377867  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:51.379891  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:51.462371  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:51.568256  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:51.880118  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:51.880143  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:51.961382  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:52.068612  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:52.377586  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:52.381647  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:52.461327  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:52.568643  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:52.878315  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:52.879559  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:52.961605  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:53.068482  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:53.378655  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:53.379601  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:53.461072  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:53.568671  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:53.879349  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:53.880185  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:53.962358  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:54.069052  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:54.378197  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:54.380524  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:54.461246  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:54.568567  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:54.880021  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:54.880328  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:54.961640  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:55.069743  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:55.378030  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:55.379750  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:55.461487  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:55.568558  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:55.879516  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:55.880546  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:55.961982  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:56.068534  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:56.378962  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:56.379597  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:56.461888  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:56.569349  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:56.878817  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:56.881709  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:56.961669  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:57.068109  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:57.378847  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:57.380721  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:57.461521  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:57.568557  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:57.877802  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:57.880219  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:57.960858  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:58.068591  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:58.378409  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:58.379749  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:58.461475  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:58.570872  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:58.878534  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:58.879621  329388 kapi.go:107] duration metric: took 59.005041471s to wait for kubernetes.io/minikube-addons=registry ...
	I0108 22:53:58.962342  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:59.068726  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:59.378412  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:59.462401  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:59.568616  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:59.949338  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:59.962013  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:00.069025  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:00.377604  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:00.461832  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:00.568766  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:00.946861  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:00.962827  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:01.069669  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:01.378401  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:01.462287  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:01.568323  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:01.878885  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:01.960890  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:02.175646  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:02.379023  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:02.462993  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:02.567876  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:02.878935  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:02.960894  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:03.068772  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:03.378205  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:03.461696  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:03.570595  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:03.878388  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:03.966347  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:04.069126  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:04.447857  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:04.462776  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:04.648061  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:04.964483  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:05.048020  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:05.071231  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:05.447368  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:05.463030  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:05.569497  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:05.948866  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:05.968456  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:06.070856  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:06.378577  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:06.463763  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:06.569366  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:06.879024  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:06.962518  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:07.069419  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:07.378856  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:07.461129  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:07.568742  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:07.878122  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:07.961805  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:08.095142  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:08.378045  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:08.461568  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:08.569015  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:08.878871  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:08.961469  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:09.068720  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:09.377979  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:09.461732  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:09.568204  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:09.878003  329388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:09.961460  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:10.069031  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:10.378282  329388 kapi.go:107] duration metric: took 1m10.507092463s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0108 22:54:10.462047  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:10.568195  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:10.961992  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:11.068195  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:11.461483  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:11.568801  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:11.962482  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:12.068620  329388 kapi.go:107] duration metric: took 1m7.503846435s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0108 22:54:12.071254  329388 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-608450 cluster.
	I0108 22:54:12.073630  329388 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0108 22:54:12.075209  329388 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0108 22:54:12.461629  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:12.960937  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:13.474281  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:13.964007  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:14.461342  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:14.962329  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:15.461692  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:15.960849  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:16.461395  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:16.960665  329388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:17.461121  329388 kapi.go:107] duration metric: took 1m16.005436652s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0108 22:54:17.463427  329388 out.go:177] * Enabled addons: storage-provisioner, ingress-dns, inspektor-gadget, cloud-spanner, helm-tiller, metrics-server, nvidia-device-plugin, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0108 22:54:17.465009  329388 addons.go:508] enable addons completed in 1m24.685959831s: enabled=[storage-provisioner ingress-dns inspektor-gadget cloud-spanner helm-tiller metrics-server nvidia-device-plugin yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0108 22:54:17.465048  329388 start.go:233] waiting for cluster config update ...
	I0108 22:54:17.465069  329388 start.go:242] writing updated cluster config ...
	I0108 22:54:17.465339  329388 ssh_runner.go:195] Run: rm -f paused
	I0108 22:54:17.514699  329388 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0108 22:54:17.516839  329388 out.go:177] * Done! kubectl is now configured to use "addons-608450" cluster and "default" namespace by default
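	
	The long run of kapi.go:96 "waiting for pod ... Pending" lines above is minikube polling each addon's pods by label selector roughly twice a second until they leave Pending, then logging a duration metric at kapi.go:107. A minimal client-go sketch of that pattern follows; the function name waitForPods, the 500ms interval, and the kubeconfig path are illustrative assumptions, not minikube's actual code.
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// waitForPods polls until every pod matching selector in ns reports
	// phase Running, logging the current phase on each miss (the pattern
	// behind the kapi.go:96 lines above).
	func waitForPods(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
		for {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			ready := len(pods.Items) > 0
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					ready = false
				}
			}
			if ready {
				return nil
			}
			select {
			case <-ctx.Done():
				return ctx.Err() // overall wait timeout expired
			case <-time.After(500 * time.Millisecond): // matches the ~0.5s cadence in the log
			}
		}
	}
	
	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		if err := waitForPods(ctx, cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx"); err != nil {
			panic(err)
		}
	}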
	
	
	==> CRI-O <==
	Jan 08 22:56:51 addons-608450 crio[945]: time="2024-01-08 22:56:51.235985610Z" level=info msg="Pulled image: gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7" id=f77e8dfd-4fa1-4c4a-99b2-e28f22d9b2f9 name=/runtime.v1.ImageService/PullImage
	Jan 08 22:56:51 addons-608450 crio[945]: time="2024-01-08 22:56:51.236823294Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=9ba2643b-821d-41db-aaab-0e213a850f8d name=/runtime.v1.ImageService/ImageStatus
	Jan 08 22:56:51 addons-608450 crio[945]: time="2024-01-08 22:56:51.237813540Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=9ba2643b-821d-41db-aaab-0e213a850f8d name=/runtime.v1.ImageService/ImageStatus
	Jan 08 22:56:51 addons-608450 crio[945]: time="2024-01-08 22:56:51.238580795Z" level=info msg="Creating container: default/hello-world-app-5d77478584-vwxbm/hello-world-app" id=3b654de6-3e4e-405a-9e1b-b61e42a729cd name=/runtime.v1.RuntimeService/CreateContainer
	Jan 08 22:56:51 addons-608450 crio[945]: time="2024-01-08 22:56:51.238683824Z" level=warning msg="Allowed annotations are specified for workload []"
	Jan 08 22:56:51 addons-608450 crio[945]: time="2024-01-08 22:56:51.316727706Z" level=info msg="Created container f95ad35dc4fde944bb08ac1db7c6f470521f604ec66611be1c7cd8657d5b9d12: default/hello-world-app-5d77478584-vwxbm/hello-world-app" id=3b654de6-3e4e-405a-9e1b-b61e42a729cd name=/runtime.v1.RuntimeService/CreateContainer
	Jan 08 22:56:51 addons-608450 crio[945]: time="2024-01-08 22:56:51.317362792Z" level=info msg="Starting container: f95ad35dc4fde944bb08ac1db7c6f470521f604ec66611be1c7cd8657d5b9d12" id=52fd3a8d-928f-4c91-88fa-095035ed098f name=/runtime.v1.RuntimeService/StartContainer
	Jan 08 22:56:51 addons-608450 crio[945]: time="2024-01-08 22:56:51.323479871Z" level=info msg="Started container" PID=9889 containerID=f95ad35dc4fde944bb08ac1db7c6f470521f604ec66611be1c7cd8657d5b9d12 description=default/hello-world-app-5d77478584-vwxbm/hello-world-app id=52fd3a8d-928f-4c91-88fa-095035ed098f name=/runtime.v1.RuntimeService/StartContainer sandboxID=65e9fa03cccba0fc11ff02f103715e89a3fc308ae6a34a0699d1a373c90b3243
	Jan 08 22:56:51 addons-608450 crio[945]: time="2024-01-08 22:56:51.608523974Z" level=info msg="Removing container: c1c1d0be2e1b1f47cc3abe50fd79718ae79df1a552d69513990fd58720bf89c2" id=dc8833e9-326a-4edb-9864-8024302ab3cf name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 08 22:56:51 addons-608450 crio[945]: time="2024-01-08 22:56:51.623463753Z" level=info msg="Removed container c1c1d0be2e1b1f47cc3abe50fd79718ae79df1a552d69513990fd58720bf89c2: kube-system/kube-ingress-dns-minikube/minikube-ingress-dns" id=dc8833e9-326a-4edb-9864-8024302ab3cf name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 08 22:56:53 addons-608450 crio[945]: time="2024-01-08 22:56:53.192112866Z" level=info msg="Stopping container: 3da3eed6971eab24a54ea2d9be33afda797167abf9a1e2ef02686ed4b2507f10 (timeout: 2s)" id=e775bb0e-42f6-474e-86ed-2a1e5e4c4853 name=/runtime.v1.RuntimeService/StopContainer
	Jan 08 22:56:55 addons-608450 crio[945]: time="2024-01-08 22:56:55.197954919Z" level=warning msg="Stopping container 3da3eed6971eab24a54ea2d9be33afda797167abf9a1e2ef02686ed4b2507f10 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=e775bb0e-42f6-474e-86ed-2a1e5e4c4853 name=/runtime.v1.RuntimeService/StopContainer
	Jan 08 22:56:55 addons-608450 conmon[5776]: conmon 3da3eed6971eab24a54e <ninfo>: container 5788 exited with status 137
	Jan 08 22:56:55 addons-608450 crio[945]: time="2024-01-08 22:56:55.331339843Z" level=info msg="Stopped container 3da3eed6971eab24a54ea2d9be33afda797167abf9a1e2ef02686ed4b2507f10: ingress-nginx/ingress-nginx-controller-69cff4fd79-fjdkg/controller" id=e775bb0e-42f6-474e-86ed-2a1e5e4c4853 name=/runtime.v1.RuntimeService/StopContainer
	Jan 08 22:56:55 addons-608450 crio[945]: time="2024-01-08 22:56:55.331930046Z" level=info msg="Stopping pod sandbox: 02706752d75ecb3a7fad50a21dd609a7bdc265f897db97cc322057da52171fce" id=c8c1d256-31dc-42d0-9922-858e37cad529 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jan 08 22:56:55 addons-608450 crio[945]: time="2024-01-08 22:56:55.335101096Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-OQKIJ5FJACAHEZCS - [0:0]\n:KUBE-HP-QPSH3FFXEYRNMDZW - [0:0]\n-X KUBE-HP-OQKIJ5FJACAHEZCS\n-X KUBE-HP-QPSH3FFXEYRNMDZW\nCOMMIT\n"
	Jan 08 22:56:55 addons-608450 crio[945]: time="2024-01-08 22:56:55.336519742Z" level=info msg="Closing host port tcp:80"
	Jan 08 22:56:55 addons-608450 crio[945]: time="2024-01-08 22:56:55.336578647Z" level=info msg="Closing host port tcp:443"
	Jan 08 22:56:55 addons-608450 crio[945]: time="2024-01-08 22:56:55.337937961Z" level=info msg="Host port tcp:80 does not have an open socket"
	Jan 08 22:56:55 addons-608450 crio[945]: time="2024-01-08 22:56:55.337961222Z" level=info msg="Host port tcp:443 does not have an open socket"
	Jan 08 22:56:55 addons-608450 crio[945]: time="2024-01-08 22:56:55.338105358Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-69cff4fd79-fjdkg Namespace:ingress-nginx ID:02706752d75ecb3a7fad50a21dd609a7bdc265f897db97cc322057da52171fce UID:894574e8-b36a-43c6-9bc5-951b3eb67e15 NetNS:/var/run/netns/0d932dda-551d-4399-8c89-a18be8dbfc2a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jan 08 22:56:55 addons-608450 crio[945]: time="2024-01-08 22:56:55.338248457Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-69cff4fd79-fjdkg from CNI network \"kindnet\" (type=ptp)"
	Jan 08 22:56:55 addons-608450 crio[945]: time="2024-01-08 22:56:55.364795409Z" level=info msg="Stopped pod sandbox: 02706752d75ecb3a7fad50a21dd609a7bdc265f897db97cc322057da52171fce" id=c8c1d256-31dc-42d0-9922-858e37cad529 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jan 08 22:56:55 addons-608450 crio[945]: time="2024-01-08 22:56:55.620623461Z" level=info msg="Removing container: 3da3eed6971eab24a54ea2d9be33afda797167abf9a1e2ef02686ed4b2507f10" id=5a0873e4-9ef6-462d-be79-bc539dc9c4ff name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 08 22:56:55 addons-608450 crio[945]: time="2024-01-08 22:56:55.633884158Z" level=info msg="Removed container 3da3eed6971eab24a54ea2d9be33afda797167abf9a1e2ef02686ed4b2507f10: ingress-nginx/ingress-nginx-controller-69cff4fd79-fjdkg/controller" id=5a0873e4-9ef6-462d-be79-bc539dc9c4ff name=/runtime.v1.RuntimeService/RemoveContainer
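	
	The stop sequence above is the standard CRI path: CRI-O sends the stop signal, waits the requested 2-second grace period, then force-kills the process, which is why conmon reports exit status 137 (128 + SIGKILL). A minimal sketch of issuing that same call against CRI-O's socket via the CRI API, assuming k8s.io/cri-api; the abbreviated container ID is taken from the log and would need to be a full ID in practice.
	
	package main
	
	import (
		"context"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		// Connect to the same socket named in the node annotations above.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()
	
		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()
	
		// Timeout is the grace period in seconds before the forced kill
		// (2s here, matching the "Stopping container ... (timeout: 2s)" line).
		_, err = client.StopContainer(ctx, &runtimeapi.StopContainerRequest{
			ContainerId: "3da3eed6971ea", // abbreviated ID from the log, for illustration
			Timeout:     2,
		})
		if err != nil {
			panic(err)
		}
	}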
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f95ad35dc4fde       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      9 seconds ago       Running             hello-world-app           0                   65e9fa03cccba       hello-world-app-5d77478584-vwxbm
	bab1f49eca04f       ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67                        2 minutes ago       Running             headlamp                  0                   77a71cd3d86dc       headlamp-7ddfbb94ff-b8l48
	2d1ff198e4abb       docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686                              2 minutes ago       Running             nginx                     0                   7f873dae6a746       nginx
	d8c2715ddb12a       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 2 minutes ago       Running             gcp-auth                  0                   d7273fe7127e4       gcp-auth-d4c87556c-mj4rv
	d991e473a3c6a       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   2 minutes ago       Exited              patch                     0                   11cd1e4af69e6       ingress-nginx-admission-patch-gwq8v
	c95252dde770a       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   3 minutes ago       Exited              create                    0                   4004293713e15       ingress-nginx-admission-create-q2cpz
	1ba9d4a330f99       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              3 minutes ago       Running             yakd                      0                   a56b861e586b7       yakd-dashboard-9947fc6bf-wqlww
	14d48627094de       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             3 minutes ago       Running             coredns                   0                   326a49a590c44       coredns-5dd5756b68-sd49x
	5b8c7d6c0c91f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             3 minutes ago       Running             storage-provisioner       0                   b38bcac356a56       storage-provisioner
	c1b297c84876a       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                                             4 minutes ago       Running             kube-proxy                0                   bba68903af4f2       kube-proxy-5x2h4
	081b971d00305       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                                             4 minutes ago       Running             kindnet-cni               0                   08abf3a68aae6       kindnet-nnd5g
	92a84b3acbbc0       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                                             4 minutes ago       Running             kube-controller-manager   0                   b95eca70c346a       kube-controller-manager-addons-608450
	351900173d9b8       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             4 minutes ago       Running             etcd                      0                   fd4e4303cd948       etcd-addons-608450
	da6d5ed92f87d       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                                             4 minutes ago       Running             kube-apiserver            0                   5385980c736f8       kube-apiserver-addons-608450
	6ce8407e1746c       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                                             4 minutes ago       Running             kube-scheduler            0                   aad333f607db3       kube-scheduler-addons-608450
	
	
	==> coredns [14d48627094de713d89febbd4f22343e6f202e3aa72bafa5d136173bf3853484] <==
	[INFO] 10.244.0.10:52840 - 19 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000089584s
	[INFO] 10.244.0.10:39380 - 45473 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,rd,ra 91 0.004545131s
	[INFO] 10.244.0.10:39380 - 37027 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,rd,ra 91 0.004678761s
	[INFO] 10.244.0.10:49013 - 6034 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004396562s
	[INFO] 10.244.0.10:49013 - 27022 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005019032s
	[INFO] 10.244.0.10:44853 - 17493 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.00439412s
	[INFO] 10.244.0.10:44853 - 28761 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004874519s
	[INFO] 10.244.0.10:41042 - 61012 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000097679s
	[INFO] 10.244.0.10:41042 - 53847 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000149875s
	[INFO] 10.244.0.21:57847 - 16722 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00021608s
	[INFO] 10.244.0.21:52193 - 1763 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000221407s
	[INFO] 10.244.0.21:33206 - 30603 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000196403s
	[INFO] 10.244.0.21:47229 - 25839 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000180635s
	[INFO] 10.244.0.21:41232 - 35747 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000113749s
	[INFO] 10.244.0.21:44963 - 1046 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000158492s
	[INFO] 10.244.0.21:44592 - 47490 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.006569257s
	[INFO] 10.244.0.21:47664 - 45904 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.007407692s
	[INFO] 10.244.0.21:55233 - 46009 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.006531285s
	[INFO] 10.244.0.21:55874 - 9470 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.006677542s
	[INFO] 10.244.0.21:41620 - 3868 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005220029s
	[INFO] 10.244.0.21:47774 - 61564 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.0053659s
	[INFO] 10.244.0.21:52504 - 9467 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 382 0.000843357s
	[INFO] 10.244.0.21:58717 - 895 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00095369s
	[INFO] 10.244.0.23:53429 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000181202s
	[INFO] 10.244.0.23:57667 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000177284s
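	
	The NXDOMAIN chain above is expected resolver behavior, not an error: kubelet-generated pod resolv.conf defaults to ndots:5, so a name like storage.googleapis.com is first tried against every search domain (gcp-auth.svc.cluster.local, svc.cluster.local, cluster.local, then the host's GCE domains such as google.internal) before the final bare query returns NOERROR.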
	
	
	==> describe nodes <==
	Name:               addons-608450
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-608450
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a
	                    minikube.k8s.io/name=addons-608450
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_08T22_52_41_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-608450
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 22:52:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-608450
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 22:56:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 22:55:44 +0000   Mon, 08 Jan 2024 22:52:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 22:55:44 +0000   Mon, 08 Jan 2024 22:52:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 22:55:44 +0000   Mon, 08 Jan 2024 22:52:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 22:55:44 +0000   Mon, 08 Jan 2024 22:53:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-608450
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	System Info:
	  Machine ID:                 a4e8f327bc7646c3a4fc1d5e164b53aa
	  System UUID:                c66eb74c-f42f-4c12-8af3-46737d340cc0
	  Boot ID:                    fd589fcb-cd24-44e5-9159-e7f1d22abeda
	  Kernel Version:             5.15.0-1047-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-vwxbm         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  gcp-auth                    gcp-auth-d4c87556c-mj4rv                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	  headlamp                    headlamp-7ddfbb94ff-b8l48                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 coredns-5dd5756b68-sd49x                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m8s
	  kube-system                 etcd-addons-608450                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m20s
	  kube-system                 kindnet-nnd5g                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m8s
	  kube-system                 kube-apiserver-addons-608450             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 kube-controller-manager-addons-608450    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 kube-proxy-5x2h4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-scheduler-addons-608450             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-wqlww           0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     4m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             348Mi (1%)  476Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m2s                   kube-proxy       
	  Normal  Starting                 4m26s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m26s (x8 over 4m26s)  kubelet          Node addons-608450 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m26s (x8 over 4m26s)  kubelet          Node addons-608450 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m26s (x8 over 4m26s)  kubelet          Node addons-608450 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m20s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m20s                  kubelet          Node addons-608450 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m20s                  kubelet          Node addons-608450 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m20s                  kubelet          Node addons-608450 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m9s                   node-controller  Node addons-608450 event: Registered Node addons-608450 in Controller
	  Normal  NodeReady                3m36s                  kubelet          Node addons-608450 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 52 dd 12 4c d8 c9 08 06
	[Jan 8 21:21] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0a 6d 57 11 12 77 08 06
	[  +5.602900] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a6 6a 81 ca 7d d0 08 06
	[  +0.000313] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff d2 d0 b1 89 1b 81 08 06
	[Jan 8 21:22] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 86 05 6f 41 57 92 08 06
	[  +0.000441] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0a 6d 57 11 12 77 08 06
	[Jan 8 22:54] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 02 7b 88 fd a0 9e 66 70 06 00 2d 08 00
	[  +1.007526] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 32 02 7b 88 fd a0 9e 66 70 06 00 2d 08 00
	[  +2.015830] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 32 02 7b 88 fd a0 9e 66 70 06 00 2d 08 00
	[  +4.031682] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 02 7b 88 fd a0 9e 66 70 06 00 2d 08 00
	[  +8.195431] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 02 7b 88 fd a0 9e 66 70 06 00 2d 08 00
	[Jan 8 22:55] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 02 7b 88 fd a0 9e 66 70 06 00 2d 08 00
	[ +32.253758] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 02 7b 88 fd a0 9e 66 70 06 00 2d 08 00
	
	
	==> etcd [351900173d9b8b965293a6815a0971d822960e14a15eb3c84794e2837ea0cee2] <==
	{"level":"warn","ts":"2024-01-08T22:52:56.964705Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.799821ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/default/cloud-spanner-emulator\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-08T22:52:57.144197Z","caller":"traceutil/trace.go:171","msg":"trace[891338269] range","detail":"{range_begin:/registry/services/specs/default/cloud-spanner-emulator; range_end:; response_count:0; response_revision:429; }","duration":"295.290022ms","start":"2024-01-08T22:52:56.84888Z","end":"2024-01-08T22:52:57.14417Z","steps":["trace[891338269] 'agreement among raft nodes before linearized reading'  (duration: 115.755258ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T22:52:56.964815Z","caller":"traceutil/trace.go:171","msg":"trace[1517165721] transaction","detail":"{read_only:false; response_revision:428; number_of_response:1; }","duration":"117.353735ms","start":"2024-01-08T22:52:56.84745Z","end":"2024-01-08T22:52:56.964804Z","steps":["trace[1517165721] 'process raft request'  (duration: 117.034892ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T22:52:57.15401Z","caller":"traceutil/trace.go:171","msg":"trace[1654068709] transaction","detail":"{read_only:false; response_revision:430; number_of_response:1; }","duration":"100.154941ms","start":"2024-01-08T22:52:57.053835Z","end":"2024-01-08T22:52:57.15399Z","steps":["trace[1654068709] 'process raft request'  (duration: 94.748987ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T22:52:57.154904Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.986619ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllers/kube-system/registry\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-08T22:52:57.15501Z","caller":"traceutil/trace.go:171","msg":"trace[583814986] range","detail":"{range_begin:/registry/controllers/kube-system/registry; range_end:; response_count:0; response_revision:431; }","duration":"111.146897ms","start":"2024-01-08T22:52:57.04385Z","end":"2024-01-08T22:52:57.154997Z","steps":["trace[583814986] 'agreement among raft nodes before linearized reading'  (duration: 110.999962ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T22:52:57.469004Z","caller":"traceutil/trace.go:171","msg":"trace[891291573] transaction","detail":"{read_only:false; response_revision:435; number_of_response:1; }","duration":"105.054696ms","start":"2024-01-08T22:52:57.363924Z","end":"2024-01-08T22:52:57.468978Z","steps":["trace[891291573] 'process raft request'  (duration: 100.333957ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T22:52:57.548442Z","caller":"traceutil/trace.go:171","msg":"trace[1808174587] transaction","detail":"{read_only:false; response_revision:436; number_of_response:1; }","duration":"184.25795ms","start":"2024-01-08T22:52:57.36416Z","end":"2024-01-08T22:52:57.548418Z","steps":["trace[1808174587] 'process raft request'  (duration: 100.24992ms)","trace[1808174587] 'store kv pair into bolt db' {req_type:put; key:/registry/ranges/servicenodeports; req_size:336; } (duration: 83.842252ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-08T22:52:57.558329Z","caller":"traceutil/trace.go:171","msg":"trace[1984907898] transaction","detail":"{read_only:false; response_revision:437; number_of_response:1; }","duration":"189.64294ms","start":"2024-01-08T22:52:57.368662Z","end":"2024-01-08T22:52:57.558305Z","steps":["trace[1984907898] 'process raft request'  (duration: 184.01794ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T22:52:58.366942Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.408146ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/local-path-storage/local-path-provisioner\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-08T22:52:58.367013Z","caller":"traceutil/trace.go:171","msg":"trace[1138213633] range","detail":"{range_begin:/registry/deployments/local-path-storage/local-path-provisioner; range_end:; response_count:0; response_revision:515; }","duration":"101.489004ms","start":"2024-01-08T22:52:58.265509Z","end":"2024-01-08T22:52:58.366998Z","steps":["trace[1138213633] 'agreement among raft nodes before linearized reading'  (duration: 101.394531ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T22:53:41.055996Z","caller":"traceutil/trace.go:171","msg":"trace[309911179] transaction","detail":"{read_only:false; response_revision:989; number_of_response:1; }","duration":"100.782396ms","start":"2024-01-08T22:53:40.955196Z","end":"2024-01-08T22:53:41.055978Z","steps":["trace[309911179] 'compare'  (duration: 98.496848ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T22:54:02.173687Z","caller":"traceutil/trace.go:171","msg":"trace[2672326] transaction","detail":"{read_only:false; response_revision:1102; number_of_response:1; }","duration":"128.732702ms","start":"2024-01-08T22:54:02.044926Z","end":"2024-01-08T22:54:02.173658Z","steps":["trace[2672326] 'process raft request'  (duration: 36.603723ms)","trace[2672326] 'compare'  (duration: 91.998204ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-08T22:54:02.173752Z","caller":"traceutil/trace.go:171","msg":"trace[1610350440] linearizableReadLoop","detail":"{readStateIndex:1134; appliedIndex:1133; }","duration":"106.921252ms","start":"2024-01-08T22:54:02.066802Z","end":"2024-01-08T22:54:02.173723Z","steps":["trace[1610350440] 'read index received'  (duration: 14.69074ms)","trace[1610350440] 'applied index is now lower than readState.Index'  (duration: 92.228137ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-08T22:54:02.173916Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.123267ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:12021"}
	{"level":"info","ts":"2024-01-08T22:54:02.173959Z","caller":"traceutil/trace.go:171","msg":"trace[2122505428] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1103; }","duration":"107.185842ms","start":"2024-01-08T22:54:02.066763Z","end":"2024-01-08T22:54:02.173949Z","steps":["trace[2122505428] 'agreement among raft nodes before linearized reading'  (duration: 107.033644ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T22:54:08.093454Z","caller":"traceutil/trace.go:171","msg":"trace[1213597198] transaction","detail":"{read_only:false; response_revision:1153; number_of_response:1; }","duration":"116.250235ms","start":"2024-01-08T22:54:07.977183Z","end":"2024-01-08T22:54:08.093433Z","steps":["trace[1213597198] 'process raft request'  (duration: 116.097454ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T22:54:13.653391Z","caller":"traceutil/trace.go:171","msg":"trace[2084302745] transaction","detail":"{read_only:false; response_revision:1208; number_of_response:1; }","duration":"116.855455ms","start":"2024-01-08T22:54:13.536506Z","end":"2024-01-08T22:54:13.653362Z","steps":["trace[2084302745] 'process raft request'  (duration: 55.546975ms)","trace[2084302745] 'compare'  (duration: 61.102903ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-08T22:54:13.65353Z","caller":"traceutil/trace.go:171","msg":"trace[2029920814] transaction","detail":"{read_only:false; response_revision:1209; number_of_response:1; }","duration":"114.158423ms","start":"2024-01-08T22:54:13.539359Z","end":"2024-01-08T22:54:13.653517Z","steps":["trace[2029920814] 'process raft request'  (duration: 113.934767ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T22:54:41.371811Z","caller":"traceutil/trace.go:171","msg":"trace[721882421] transaction","detail":"{read_only:false; response_revision:1455; number_of_response:1; }","duration":"116.777141ms","start":"2024-01-08T22:54:41.25501Z","end":"2024-01-08T22:54:41.371787Z","steps":["trace[721882421] 'process raft request'  (duration: 53.569223ms)","trace[721882421] 'compare'  (duration: 63.039042ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-08T22:54:41.372092Z","caller":"traceutil/trace.go:171","msg":"trace[1396023916] transaction","detail":"{read_only:false; response_revision:1456; number_of_response:1; }","duration":"117.060452ms","start":"2024-01-08T22:54:41.255011Z","end":"2024-01-08T22:54:41.372071Z","steps":["trace[1396023916] 'process raft request'  (duration: 116.72596ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T22:54:41.37221Z","caller":"traceutil/trace.go:171","msg":"trace[2128408377] transaction","detail":"{read_only:false; response_revision:1457; number_of_response:1; }","duration":"113.181444ms","start":"2024-01-08T22:54:41.259011Z","end":"2024-01-08T22:54:41.372193Z","steps":["trace[2128408377] 'process raft request'  (duration: 112.972178ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T22:54:41.497446Z","caller":"traceutil/trace.go:171","msg":"trace[133710064] transaction","detail":"{read_only:false; response_revision:1459; number_of_response:1; }","duration":"120.997158ms","start":"2024-01-08T22:54:41.376424Z","end":"2024-01-08T22:54:41.497422Z","steps":["trace[133710064] 'process raft request'  (duration: 120.064294ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T22:54:41.497459Z","caller":"traceutil/trace.go:171","msg":"trace[1737849745] transaction","detail":"{read_only:false; response_revision:1461; number_of_response:1; }","duration":"119.726446ms","start":"2024-01-08T22:54:41.377716Z","end":"2024-01-08T22:54:41.497443Z","steps":["trace[1737849745] 'process raft request'  (duration: 119.680002ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T22:54:41.497467Z","caller":"traceutil/trace.go:171","msg":"trace[1032535875] transaction","detail":"{read_only:false; response_revision:1460; number_of_response:1; }","duration":"121.01212ms","start":"2024-01-08T22:54:41.37644Z","end":"2024-01-08T22:54:41.497452Z","steps":["trace[1032535875] 'process raft request'  (duration: 120.911726ms)"],"step_count":1}
	
	
	==> gcp-auth [d8c2715ddb12ab833a4b01b5cb66640a1f392eff87e0d0e3e1ccb5e07e62b865] <==
	2024/01/08 22:54:11 GCP Auth Webhook started!
	2024/01/08 22:54:23 Ready to marshal response ...
	2024/01/08 22:54:23 Ready to write response ...
	2024/01/08 22:54:28 Ready to marshal response ...
	2024/01/08 22:54:28 Ready to write response ...
	2024/01/08 22:54:29 Ready to marshal response ...
	2024/01/08 22:54:29 Ready to write response ...
	2024/01/08 22:54:36 Ready to marshal response ...
	2024/01/08 22:54:36 Ready to write response ...
	2024/01/08 22:54:36 Ready to marshal response ...
	2024/01/08 22:54:36 Ready to write response ...
	2024/01/08 22:54:36 Ready to marshal response ...
	2024/01/08 22:54:36 Ready to write response ...
	2024/01/08 22:54:44 Ready to marshal response ...
	2024/01/08 22:54:44 Ready to write response ...
	2024/01/08 22:54:44 Ready to marshal response ...
	2024/01/08 22:54:44 Ready to write response ...
	2024/01/08 22:54:57 Ready to marshal response ...
	2024/01/08 22:54:57 Ready to write response ...
	2024/01/08 22:55:22 Ready to marshal response ...
	2024/01/08 22:55:22 Ready to write response ...
	2024/01/08 22:55:36 Ready to marshal response ...
	2024/01/08 22:55:36 Ready to write response ...
	2024/01/08 22:56:49 Ready to marshal response ...
	2024/01/08 22:56:49 Ready to write response ...
	
	
	==> kernel <==
	 22:57:00 up  3:39,  0 users,  load average: 0.29, 0.89, 0.49
	Linux addons-608450 5.15.0-1047-gcp #55~20.04.1-Ubuntu SMP Wed Nov 15 11:38:25 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [081b971d00305ade0e2eb119ae229d5a89221047068ee78cf2cba1621fbae9bb] <==
	I0108 22:54:55.010727       1 main.go:227] handling current node
	I0108 22:55:05.022978       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 22:55:05.023001       1 main.go:227] handling current node
	I0108 22:55:15.027253       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 22:55:15.027312       1 main.go:227] handling current node
	I0108 22:55:25.047224       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 22:55:25.047252       1 main.go:227] handling current node
	I0108 22:55:35.051207       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 22:55:35.051233       1 main.go:227] handling current node
	I0108 22:55:45.062899       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 22:55:45.062929       1 main.go:227] handling current node
	I0108 22:55:55.067598       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 22:55:55.067628       1 main.go:227] handling current node
	I0108 22:56:05.080015       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 22:56:05.080039       1 main.go:227] handling current node
	I0108 22:56:15.092798       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 22:56:15.092824       1 main.go:227] handling current node
	I0108 22:56:25.096769       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 22:56:25.096795       1 main.go:227] handling current node
	I0108 22:56:35.106779       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 22:56:35.106802       1 main.go:227] handling current node
	I0108 22:56:45.119333       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 22:56:45.119357       1 main.go:227] handling current node
	I0108 22:56:55.130691       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 22:56:55.130715       1 main.go:227] handling current node
	
	
	==> kube-apiserver [da6d5ed92f87d0ba66bbe03099a4df4f098b689c4f479e9472ab82da66d01161] <==
	I0108 22:54:36.191899       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.103.192.199"}
	I0108 22:54:41.780329       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	E0108 22:55:13.410897       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0108 22:55:33.947235       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0108 22:55:52.800615       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 22:55:52.800759       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 22:55:52.807590       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 22:55:52.807731       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 22:55:52.814828       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 22:55:52.814870       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 22:55:52.816522       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 22:55:52.816565       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 22:55:52.825049       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 22:55:52.825108       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 22:55:52.830266       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 22:55:52.830317       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 22:55:52.849712       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 22:55:52.849773       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 22:55:52.853310       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 22:55:52.853356       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0108 22:55:53.816875       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0108 22:55:53.853610       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0108 22:55:53.951566       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0108 22:56:49.991454       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.96.148.97"}
	E0108 22:56:52.254257       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [92a84b3acbbc013516d7cc0ccc88f2bff8da4a30851729bc3b0c4f7281162393] <==
	W0108 22:56:14.241823       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0108 22:56:14.241859       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0108 22:56:22.074105       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0108 22:56:22.074140       1 shared_informer.go:318] Caches are synced for resource quota
	I0108 22:56:22.496147       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0108 22:56:22.496192       1 shared_informer.go:318] Caches are synced for garbage collector
	W0108 22:56:27.833394       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0108 22:56:27.833426       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0108 22:56:30.934660       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0108 22:56:30.934691       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0108 22:56:37.759643       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0108 22:56:37.759673       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0108 22:56:48.124655       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0108 22:56:48.124688       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0108 22:56:49.815747       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0108 22:56:49.825824       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-vwxbm"
	I0108 22:56:49.831508       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="15.924207ms"
	I0108 22:56:49.837666       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="6.107775ms"
	I0108 22:56:49.837762       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="53.853µs"
	I0108 22:56:49.844175       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="106.257µs"
	I0108 22:56:51.636654       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="5.254881ms"
	I0108 22:56:51.636761       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="59.672µs"
	I0108 22:56:52.176836       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0108 22:56:52.181290       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="10.688µs"
	I0108 22:56:52.182904       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	
	
	==> kube-proxy [c1b297c84876a01cf180bca23f6b60410375817ffbaaeec1e8eebc39e9150041] <==
	I0108 22:52:56.664452       1 server_others.go:69] "Using iptables proxy"
	I0108 22:52:57.248713       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0108 22:52:58.150731       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0108 22:52:58.153425       1 server_others.go:152] "Using iptables Proxier"
	I0108 22:52:58.153553       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0108 22:52:58.153600       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0108 22:52:58.153660       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0108 22:52:58.153973       1 server.go:846] "Version info" version="v1.28.4"
	I0108 22:52:58.154399       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 22:52:58.155480       1 config.go:188] "Starting service config controller"
	I0108 22:52:58.157355       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0108 22:52:58.156184       1 config.go:97] "Starting endpoint slice config controller"
	I0108 22:52:58.156760       1 config.go:315] "Starting node config controller"
	I0108 22:52:58.157562       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0108 22:52:58.157542       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0108 22:52:58.259036       1 shared_informer.go:318] Caches are synced for service config
	I0108 22:52:58.259346       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0108 22:52:58.259472       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [6ce8407e1746cb01902d0ea0ec26e25dbb698922d646c16d907ef5fd58457116] <==
	W0108 22:52:37.064762       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0108 22:52:37.064776       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0108 22:52:37.064810       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 22:52:37.064861       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0108 22:52:37.064962       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0108 22:52:37.064976       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 22:52:37.064982       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 22:52:37.064992       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0108 22:52:37.065059       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0108 22:52:37.065072       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0108 22:52:37.879296       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 22:52:37.879328       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0108 22:52:37.902893       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0108 22:52:37.902933       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0108 22:52:37.988533       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0108 22:52:37.988563       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0108 22:52:38.043695       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 22:52:38.043733       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0108 22:52:38.045974       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0108 22:52:38.046006       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0108 22:52:38.189427       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0108 22:52:38.189469       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0108 22:52:38.236049       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0108 22:52:38.236079       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0108 22:52:40.459088       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 08 22:56:50 addons-608450 kubelet[1550]: I0108 22:56:50.043581    1550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/b9a8b7d7-6c87-4d3a-8cba-72c4e220a268-gcp-creds\") pod \"hello-world-app-5d77478584-vwxbm\" (UID: \"b9a8b7d7-6c87-4d3a-8cba-72c4e220a268\") " pod="default/hello-world-app-5d77478584-vwxbm"
	Jan 08 22:56:50 addons-608450 kubelet[1550]: I0108 22:56:50.043644    1550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74ts7\" (UniqueName: \"kubernetes.io/projected/b9a8b7d7-6c87-4d3a-8cba-72c4e220a268-kube-api-access-74ts7\") pod \"hello-world-app-5d77478584-vwxbm\" (UID: \"b9a8b7d7-6c87-4d3a-8cba-72c4e220a268\") " pod="default/hello-world-app-5d77478584-vwxbm"
	Jan 08 22:56:50 addons-608450 kubelet[1550]: W0108 22:56:50.484093    1550 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/7910492989e553469ed5faf589502209db570b7f1bfa70d4a42c2985db3bb093/crio-65e9fa03cccba0fc11ff02f103715e89a3fc308ae6a34a0699d1a373c90b3243 WatchSource:0}: Error finding container 65e9fa03cccba0fc11ff02f103715e89a3fc308ae6a34a0699d1a373c90b3243: Status 404 returned error can't find the container with id 65e9fa03cccba0fc11ff02f103715e89a3fc308ae6a34a0699d1a373c90b3243
	Jan 08 22:56:51 addons-608450 kubelet[1550]: I0108 22:56:51.048867    1550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cq8c8\" (UniqueName: \"kubernetes.io/projected/945400d4-8f0f-428a-a8c3-9ec8ff72f174-kube-api-access-cq8c8\") pod \"945400d4-8f0f-428a-a8c3-9ec8ff72f174\" (UID: \"945400d4-8f0f-428a-a8c3-9ec8ff72f174\") "
	Jan 08 22:56:51 addons-608450 kubelet[1550]: I0108 22:56:51.050841    1550 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/945400d4-8f0f-428a-a8c3-9ec8ff72f174-kube-api-access-cq8c8" (OuterVolumeSpecName: "kube-api-access-cq8c8") pod "945400d4-8f0f-428a-a8c3-9ec8ff72f174" (UID: "945400d4-8f0f-428a-a8c3-9ec8ff72f174"). InnerVolumeSpecName "kube-api-access-cq8c8". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 08 22:56:51 addons-608450 kubelet[1550]: I0108 22:56:51.149310    1550 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-cq8c8\" (UniqueName: \"kubernetes.io/projected/945400d4-8f0f-428a-a8c3-9ec8ff72f174-kube-api-access-cq8c8\") on node \"addons-608450\" DevicePath \"\""
	Jan 08 22:56:51 addons-608450 kubelet[1550]: I0108 22:56:51.607401    1550 scope.go:117] "RemoveContainer" containerID="c1c1d0be2e1b1f47cc3abe50fd79718ae79df1a552d69513990fd58720bf89c2"
	Jan 08 22:56:51 addons-608450 kubelet[1550]: I0108 22:56:51.623722    1550 scope.go:117] "RemoveContainer" containerID="c1c1d0be2e1b1f47cc3abe50fd79718ae79df1a552d69513990fd58720bf89c2"
	Jan 08 22:56:51 addons-608450 kubelet[1550]: E0108 22:56:51.624208    1550 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1c1d0be2e1b1f47cc3abe50fd79718ae79df1a552d69513990fd58720bf89c2\": container with ID starting with c1c1d0be2e1b1f47cc3abe50fd79718ae79df1a552d69513990fd58720bf89c2 not found: ID does not exist" containerID="c1c1d0be2e1b1f47cc3abe50fd79718ae79df1a552d69513990fd58720bf89c2"
	Jan 08 22:56:51 addons-608450 kubelet[1550]: I0108 22:56:51.624274    1550 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1c1d0be2e1b1f47cc3abe50fd79718ae79df1a552d69513990fd58720bf89c2"} err="failed to get container status \"c1c1d0be2e1b1f47cc3abe50fd79718ae79df1a552d69513990fd58720bf89c2\": rpc error: code = NotFound desc = could not find container \"c1c1d0be2e1b1f47cc3abe50fd79718ae79df1a552d69513990fd58720bf89c2\": container with ID starting with c1c1d0be2e1b1f47cc3abe50fd79718ae79df1a552d69513990fd58720bf89c2 not found: ID does not exist"
	Jan 08 22:56:51 addons-608450 kubelet[1550]: I0108 22:56:51.631045    1550 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/hello-world-app-5d77478584-vwxbm" podStartSLOduration=1.881843422 podCreationTimestamp="2024-01-08 22:56:49 +0000 UTC" firstStartedPulling="2024-01-08 22:56:50.487130876 +0000 UTC m=+250.341030328" lastFinishedPulling="2024-01-08 22:56:51.23629274 +0000 UTC m=+251.090192184" observedRunningTime="2024-01-08 22:56:51.630472377 +0000 UTC m=+251.484371838" watchObservedRunningTime="2024-01-08 22:56:51.631005278 +0000 UTC m=+251.484904738"
	Jan 08 22:56:52 addons-608450 kubelet[1550]: I0108 22:56:52.270430    1550 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="6dd1da68-2b25-47dc-9452-213ebf95c570" path="/var/lib/kubelet/pods/6dd1da68-2b25-47dc-9452-213ebf95c570/volumes"
	Jan 08 22:56:52 addons-608450 kubelet[1550]: I0108 22:56:52.270772    1550 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="945400d4-8f0f-428a-a8c3-9ec8ff72f174" path="/var/lib/kubelet/pods/945400d4-8f0f-428a-a8c3-9ec8ff72f174/volumes"
	Jan 08 22:56:52 addons-608450 kubelet[1550]: I0108 22:56:52.271039    1550 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="dba8729c-c5a7-4bcd-83c8-452cd3232f66" path="/var/lib/kubelet/pods/dba8729c-c5a7-4bcd-83c8-452cd3232f66/volumes"
	Jan 08 22:56:55 addons-608450 kubelet[1550]: I0108 22:56:55.477740    1550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/894574e8-b36a-43c6-9bc5-951b3eb67e15-webhook-cert\") pod \"894574e8-b36a-43c6-9bc5-951b3eb67e15\" (UID: \"894574e8-b36a-43c6-9bc5-951b3eb67e15\") "
	Jan 08 22:56:55 addons-608450 kubelet[1550]: I0108 22:56:55.477814    1550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnldb\" (UniqueName: \"kubernetes.io/projected/894574e8-b36a-43c6-9bc5-951b3eb67e15-kube-api-access-cnldb\") pod \"894574e8-b36a-43c6-9bc5-951b3eb67e15\" (UID: \"894574e8-b36a-43c6-9bc5-951b3eb67e15\") "
	Jan 08 22:56:55 addons-608450 kubelet[1550]: I0108 22:56:55.479671    1550 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/894574e8-b36a-43c6-9bc5-951b3eb67e15-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "894574e8-b36a-43c6-9bc5-951b3eb67e15" (UID: "894574e8-b36a-43c6-9bc5-951b3eb67e15"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 08 22:56:55 addons-608450 kubelet[1550]: I0108 22:56:55.479945    1550 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/894574e8-b36a-43c6-9bc5-951b3eb67e15-kube-api-access-cnldb" (OuterVolumeSpecName: "kube-api-access-cnldb") pod "894574e8-b36a-43c6-9bc5-951b3eb67e15" (UID: "894574e8-b36a-43c6-9bc5-951b3eb67e15"). InnerVolumeSpecName "kube-api-access-cnldb". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 08 22:56:55 addons-608450 kubelet[1550]: I0108 22:56:55.578550    1550 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/894574e8-b36a-43c6-9bc5-951b3eb67e15-webhook-cert\") on node \"addons-608450\" DevicePath \"\""
	Jan 08 22:56:55 addons-608450 kubelet[1550]: I0108 22:56:55.578598    1550 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-cnldb\" (UniqueName: \"kubernetes.io/projected/894574e8-b36a-43c6-9bc5-951b3eb67e15-kube-api-access-cnldb\") on node \"addons-608450\" DevicePath \"\""
	Jan 08 22:56:55 addons-608450 kubelet[1550]: I0108 22:56:55.619685    1550 scope.go:117] "RemoveContainer" containerID="3da3eed6971eab24a54ea2d9be33afda797167abf9a1e2ef02686ed4b2507f10"
	Jan 08 22:56:55 addons-608450 kubelet[1550]: I0108 22:56:55.634178    1550 scope.go:117] "RemoveContainer" containerID="3da3eed6971eab24a54ea2d9be33afda797167abf9a1e2ef02686ed4b2507f10"
	Jan 08 22:56:55 addons-608450 kubelet[1550]: E0108 22:56:55.634638    1550 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3da3eed6971eab24a54ea2d9be33afda797167abf9a1e2ef02686ed4b2507f10\": container with ID starting with 3da3eed6971eab24a54ea2d9be33afda797167abf9a1e2ef02686ed4b2507f10 not found: ID does not exist" containerID="3da3eed6971eab24a54ea2d9be33afda797167abf9a1e2ef02686ed4b2507f10"
	Jan 08 22:56:55 addons-608450 kubelet[1550]: I0108 22:56:55.634697    1550 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3da3eed6971eab24a54ea2d9be33afda797167abf9a1e2ef02686ed4b2507f10"} err="failed to get container status \"3da3eed6971eab24a54ea2d9be33afda797167abf9a1e2ef02686ed4b2507f10\": rpc error: code = NotFound desc = could not find container \"3da3eed6971eab24a54ea2d9be33afda797167abf9a1e2ef02686ed4b2507f10\": container with ID starting with 3da3eed6971eab24a54ea2d9be33afda797167abf9a1e2ef02686ed4b2507f10 not found: ID does not exist"
	Jan 08 22:56:56 addons-608450 kubelet[1550]: I0108 22:56:56.269611    1550 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="894574e8-b36a-43c6-9bc5-951b3eb67e15" path="/var/lib/kubelet/pods/894574e8-b36a-43c6-9bc5-951b3eb67e15/volumes"
	
	
	==> storage-provisioner [5b8c7d6c0c91f0e8e1efb577d1611bbd6d76708dedcde3e670e9f17e3368ac16] <==
	I0108 22:53:25.570378       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0108 22:53:25.578327       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0108 22:53:25.578378       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0108 22:53:25.584723       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0108 22:53:25.584911       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-608450_bf7188d2-4799-4fd6-8b9c-16240977eb58!
	I0108 22:53:25.584846       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cd861d8a-0507-443c-bee0-0727eeadb5de", APIVersion:"v1", ResourceVersion:"905", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-608450_bf7188d2-4799-4fd6-8b9c-16240977eb58 became leader
	I0108 22:53:25.685225       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-608450_bf7188d2-4799-4fd6-8b9c-16240977eb58!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-608450 -n addons-608450
helpers_test.go:261: (dbg) Run:  kubectl --context addons-608450 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (152.62s)
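The failing step above is the in-VM curl that never returned (exit status 28 is curl's operation-timeout code, propagated through ssh). For manual triage it can be replayed against the same profile; a minimal sketch, assuming the addons-608450 profile is still running with the ingress addon enabled (the -i and --max-time flags are added here only to surface the HTTP status line and bound the wait):

	# Confirm the controller pod is up, then replay the curl that timed out
	kubectl --context addons-608450 -n ingress-nginx get pods -o wide
	out/minikube-linux-amd64 -p addons-608450 ssh \
	  "curl -s -i --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"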

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (9.88s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 image load --daemon gcr.io/google-containers/addon-resizer:functional-688728 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-688728 image load --daemon gcr.io/google-containers/addon-resizer:functional-688728 --alsologtostderr: (7.581831399s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 image ls
functional_test.go:447: (dbg) Done: out/minikube-linux-amd64 -p functional-688728 image ls: (2.296154179s)
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-688728" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (9.88s)
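The load-then-verify sequence can be reproduced by hand; a minimal sketch, assuming the functional-688728 profile is still running and the retagged addon-resizer image already exists in the host's Docker daemon:

	# Load the image from the host daemon into the cluster, then check it landed
	out/minikube-linux-amd64 -p functional-688728 image load --daemon \
	  gcr.io/google-containers/addon-resizer:functional-688728 --alsologtostderr
	out/minikube-linux-amd64 -p functional-688728 image ls | grep addon-resizer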

TestIngressAddonLegacy/serial/ValidateIngressAddons (176.64s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:207: (dbg) Run:  kubectl --context ingress-addon-legacy-713577 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:207: (dbg) Done: kubectl --context ingress-addon-legacy-713577 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (8.30344317s)
addons_test.go:232: (dbg) Run:  kubectl --context ingress-addon-legacy-713577 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context ingress-addon-legacy-713577 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [eaccec6b-0121-4724-9d9d-e60cc0f14c5b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [eaccec6b-0121-4724-9d9d-e60cc0f14c5b] Running
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.003266263s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-713577 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E0108 23:04:17.536938  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/addons-608450/client.crt: no such file or directory
E0108 23:04:45.223174  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/addons-608450/client.crt: no such file or directory
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-713577 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.346114346s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context ingress-addon-legacy-713577 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-713577 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.012227076s)

-- stdout --
	;; connection timed out; no servers could be reached
	
-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
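The empty stderr plus ";; connection timed out" above means the resolver at 192.168.49.2 never answered at all. A quicker probe for triage, assuming dig is available on the host and 192.168.49.2 is still the node IP (dig is swapped in for nslookup here only because its +time/+tries options make the timeout behavior explicit):

	# Query the ingress-dns resolver on the node with a bounded timeout
	dig +time=5 +tries=1 @192.168.49.2 hello-john.test A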
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-713577 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-713577 addons disable ingress-dns --alsologtostderr -v=1: (1.70082523s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-713577 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-713577 addons disable ingress --alsologtostderr -v=1: (7.448489359s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-713577
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-713577:

-- stdout --
	[
	    {
	        "Id": "ec06eaaf4a978e1c14eb65a9fcaf456fe806393b73f157f25a3e94a0f0a16625",
	        "Created": "2024-01-08T23:01:44.702984122Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 367039,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-08T23:01:44.99661072Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a68510052ff42a82cad4cbbd1f236376dac91176d14d2a924a5e2b18f7ff0a23",
	        "ResolvConfPath": "/var/lib/docker/containers/ec06eaaf4a978e1c14eb65a9fcaf456fe806393b73f157f25a3e94a0f0a16625/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ec06eaaf4a978e1c14eb65a9fcaf456fe806393b73f157f25a3e94a0f0a16625/hostname",
	        "HostsPath": "/var/lib/docker/containers/ec06eaaf4a978e1c14eb65a9fcaf456fe806393b73f157f25a3e94a0f0a16625/hosts",
	        "LogPath": "/var/lib/docker/containers/ec06eaaf4a978e1c14eb65a9fcaf456fe806393b73f157f25a3e94a0f0a16625/ec06eaaf4a978e1c14eb65a9fcaf456fe806393b73f157f25a3e94a0f0a16625-json.log",
	        "Name": "/ingress-addon-legacy-713577",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-713577:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ingress-addon-legacy-713577",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3394ff2676a6a50292975224b960d5ba5cc474d5f088113c1770af2343f96521-init/diff:/var/lib/docker/overlay2/5d41a77db4225bbdb2799c0759ad4432ee2e97ed824f853dc9d7fa3db67a2cbc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3394ff2676a6a50292975224b960d5ba5cc474d5f088113c1770af2343f96521/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3394ff2676a6a50292975224b960d5ba5cc474d5f088113c1770af2343f96521/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3394ff2676a6a50292975224b960d5ba5cc474d5f088113c1770af2343f96521/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-713577",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-713577/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-713577",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-713577",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-713577",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cfbbc38d49501f6f85e81473603534764dd0ae3f98fc965f17683268391621d3",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/cfbbc38d4950",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-713577": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "ec06eaaf4a97",
	                        "ingress-addon-legacy-713577"
	                    ],
	                    "NetworkID": "b78f7ce20fcfd1249006210315547a7bd232af1c94726d6000e2296ac87bfd77",
	                    "EndpointID": "7570d5689c5e027e000783a46a00f2d20a3b867a31f9919c948f9be4137fe8e2",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
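The inspect dump above can be narrowed to the two fields that matter for network triage, the container state and its address on the minikube network; a sketch, assuming the ingress-addon-legacy-713577 container still exists:

	# Print just the container state and its IP on the minikube network
	docker inspect -f '{{.State.Status}} {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' \
	  ingress-addon-legacy-713577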
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-713577 -n ingress-addon-legacy-713577
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-713577 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-713577 logs -n 25: (1.121100791s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	
	==> Audit <==
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                  |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| mount          | -p functional-688728                                                   | functional-688728           | jenkins | v1.32.0 | 08 Jan 24 23:01 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup3375465221/001:/mount2 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| mount          | -p functional-688728                                                   | functional-688728           | jenkins | v1.32.0 | 08 Jan 24 23:01 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup3375465221/001:/mount1 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| update-context | functional-688728                                                      | functional-688728           | jenkins | v1.32.0 | 08 Jan 24 23:01 UTC | 08 Jan 24 23:01 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| service        | functional-688728 service                                              | functional-688728           | jenkins | v1.32.0 | 08 Jan 24 23:01 UTC | 08 Jan 24 23:01 UTC |
	|                | --namespace=default --https                                            |                             |         |         |                     |                     |
	|                | --url hello-node                                                       |                             |         |         |                     |                     |
	| image          | functional-688728                                                      | functional-688728           | jenkins | v1.32.0 | 08 Jan 24 23:01 UTC | 08 Jan 24 23:01 UTC |
	|                | image ls --format short                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh            | functional-688728 ssh findmnt                                          | functional-688728           | jenkins | v1.32.0 | 08 Jan 24 23:01 UTC | 08 Jan 24 23:01 UTC |
	|                | -T /mount1                                                             |                             |         |         |                     |                     |
	| image          | functional-688728                                                      | functional-688728           | jenkins | v1.32.0 | 08 Jan 24 23:01 UTC | 08 Jan 24 23:01 UTC |
	|                | image ls --format json                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh            | functional-688728 ssh findmnt                                          | functional-688728           | jenkins | v1.32.0 | 08 Jan 24 23:01 UTC | 08 Jan 24 23:01 UTC |
	|                | -T /mount2                                                             |                             |         |         |                     |                     |
	| service        | functional-688728                                                      | functional-688728           | jenkins | v1.32.0 | 08 Jan 24 23:01 UTC | 08 Jan 24 23:01 UTC |
	|                | service hello-node --url                                               |                             |         |         |                     |                     |
	|                | --format={{.IP}}                                                       |                             |         |         |                     |                     |
	| image          | functional-688728                                                      | functional-688728           | jenkins | v1.32.0 | 08 Jan 24 23:01 UTC | 08 Jan 24 23:01 UTC |
	|                | image ls --format table                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh            | functional-688728 ssh findmnt                                          | functional-688728           | jenkins | v1.32.0 | 08 Jan 24 23:01 UTC | 08 Jan 24 23:01 UTC |
	|                | -T /mount3                                                             |                             |         |         |                     |                     |
	| image          | functional-688728                                                      | functional-688728           | jenkins | v1.32.0 | 08 Jan 24 23:01 UTC | 08 Jan 24 23:01 UTC |
	|                | image ls --format yaml                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| mount          | -p functional-688728                                                   | functional-688728           | jenkins | v1.32.0 | 08 Jan 24 23:01 UTC |                     |
	|                | --kill=true                                                            |                             |         |         |                     |                     |
	| service        | functional-688728 service                                              | functional-688728           | jenkins | v1.32.0 | 08 Jan 24 23:01 UTC | 08 Jan 24 23:01 UTC |
	|                | hello-node --url                                                       |                             |         |         |                     |                     |
	| ssh            | functional-688728 ssh pgrep                                            | functional-688728           | jenkins | v1.32.0 | 08 Jan 24 23:01 UTC |                     |
	|                | buildkitd                                                              |                             |         |         |                     |                     |
	| image          | functional-688728 image build -t                                       | functional-688728           | jenkins | v1.32.0 | 08 Jan 24 23:01 UTC | 08 Jan 24 23:01 UTC |
	|                | localhost/my-image:functional-688728                                   |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                       |                             |         |         |                     |                     |
	| image          | functional-688728 image ls                                             | functional-688728           | jenkins | v1.32.0 | 08 Jan 24 23:01 UTC | 08 Jan 24 23:01 UTC |
	| delete         | -p functional-688728                                                   | functional-688728           | jenkins | v1.32.0 | 08 Jan 24 23:01 UTC | 08 Jan 24 23:01 UTC |
	| start          | -p ingress-addon-legacy-713577                                         | ingress-addon-legacy-713577 | jenkins | v1.32.0 | 08 Jan 24 23:01 UTC | 08 Jan 24 23:02 UTC |
	|                | --kubernetes-version=v1.18.20                                          |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                                                   |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-713577                                            | ingress-addon-legacy-713577 | jenkins | v1.32.0 | 08 Jan 24 23:02 UTC | 08 Jan 24 23:02 UTC |
	|                | addons enable ingress                                                  |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-713577                                            | ingress-addon-legacy-713577 | jenkins | v1.32.0 | 08 Jan 24 23:02 UTC | 08 Jan 24 23:02 UTC |
	|                | addons enable ingress-dns                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-713577                                            | ingress-addon-legacy-713577 | jenkins | v1.32.0 | 08 Jan 24 23:03 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/                                          |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                                           |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-713577 ip                                         | ingress-addon-legacy-713577 | jenkins | v1.32.0 | 08 Jan 24 23:05 UTC | 08 Jan 24 23:05 UTC |
	| addons         | ingress-addon-legacy-713577                                            | ingress-addon-legacy-713577 | jenkins | v1.32.0 | 08 Jan 24 23:05 UTC | 08 Jan 24 23:05 UTC |
	|                | addons disable ingress-dns                                             |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-713577                                            | ingress-addon-legacy-713577 | jenkins | v1.32.0 | 08 Jan 24 23:05 UTC | 08 Jan 24 23:05 UTC |
	|                | addons disable ingress                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
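
	The ssh curl row at 23:03 is the ingress probe, and it is the row above that never records a completion timestamp. A minimal re-run of that probe against the same profile (profile name and Host header exactly as logged; the explicit --max-time flag is an assumption added here so a hung ingress fails fast rather than blocking the session):

	out/minikube-linux-amd64 -p ingress-addon-legacy-713577 ssh \
	  "curl -s --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"
	# a curl timeout here means sshd answered but the in-cluster ingress never
	# served the nginx.example.com virtual host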
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 23:01:31
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 23:01:31.827298  366443 out.go:296] Setting OutFile to fd 1 ...
	I0108 23:01:31.827600  366443 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 23:01:31.827610  366443 out.go:309] Setting ErrFile to fd 2...
	I0108 23:01:31.827614  366443 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 23:01:31.827808  366443 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-321683/.minikube/bin
	I0108 23:01:31.828439  366443 out.go:303] Setting JSON to false
	I0108 23:01:31.829445  366443 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":13424,"bootTime":1704741468,"procs":270,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 23:01:31.829512  366443 start.go:138] virtualization: kvm guest
	I0108 23:01:31.832343  366443 out.go:177] * [ingress-addon-legacy-713577] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 23:01:31.834344  366443 out.go:177]   - MINIKUBE_LOCATION=17830
	I0108 23:01:31.834339  366443 notify.go:220] Checking for updates...
	I0108 23:01:31.836488  366443 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 23:01:31.838494  366443 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17830-321683/kubeconfig
	I0108 23:01:31.840537  366443 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-321683/.minikube
	I0108 23:01:31.842442  366443 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 23:01:31.844155  366443 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 23:01:31.846074  366443 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 23:01:31.872284  366443 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 23:01:31.872406  366443 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 23:01:31.925340  366443 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:36 SystemTime:2024-01-08 23:01:31.916152854 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 23:01:31.925447  366443 docker.go:295] overlay module found
	I0108 23:01:31.927723  366443 out.go:177] * Using the docker driver based on user configuration
	I0108 23:01:31.929479  366443 start.go:298] selected driver: docker
	I0108 23:01:31.929500  366443 start.go:902] validating driver "docker" against <nil>
	I0108 23:01:31.929513  366443 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 23:01:31.930345  366443 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 23:01:31.985069  366443 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:36 SystemTime:2024-01-08 23:01:31.976407844 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 23:01:31.985245  366443 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0108 23:01:31.985465  366443 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 23:01:31.987506  366443 out.go:177] * Using Docker driver with root privileges
	I0108 23:01:31.989100  366443 cni.go:84] Creating CNI manager for ""
	I0108 23:01:31.989121  366443 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0108 23:01:31.989132  366443 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I0108 23:01:31.989143  366443 start_flags.go:323] config:
	{Name:ingress-addon-legacy-713577 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-713577 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 23:01:31.991095  366443 out.go:177] * Starting control plane node ingress-addon-legacy-713577 in cluster ingress-addon-legacy-713577
	I0108 23:01:31.992809  366443 cache.go:121] Beginning downloading kic base image for docker with crio
	I0108 23:01:31.994393  366443 out.go:177] * Pulling base image v0.0.42-1704751654-17830 ...
	I0108 23:01:31.995934  366443 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0108 23:01:31.995977  366443 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 in local docker daemon
	I0108 23:01:32.012427  366443 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 in local docker daemon, skipping pull
	I0108 23:01:32.012460  366443 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 exists in daemon, skipping load
	I0108 23:01:32.018485  366443 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0108 23:01:32.018514  366443 cache.go:56] Caching tarball of preloaded images
	I0108 23:01:32.018697  366443 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0108 23:01:32.020717  366443 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0108 23:01:32.022255  366443 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0108 23:01:32.044218  366443 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/17830-321683/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0108 23:01:36.397572  366443 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0108 23:01:36.397676  366443 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17830-321683/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0108 23:01:37.411167  366443 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I0108 23:01:37.411554  366443 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/config.json ...
	I0108 23:01:37.411588  366443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/config.json: {Name:mk6cf980626e954a46bb84fa9127ea5eb22b2d66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 23:01:37.411783  366443 cache.go:194] Successfully downloaded all kic artifacts
	I0108 23:01:37.411831  366443 start.go:365] acquiring machines lock for ingress-addon-legacy-713577: {Name:mkcd54b3edf4d21b6430c15bd0c0600f7b22202c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:01:37.411905  366443 start.go:369] acquired machines lock for "ingress-addon-legacy-713577" in 52.831µs
	I0108 23:01:37.411932  366443 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-713577 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-713577 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 23:01:37.412038  366443 start.go:125] createHost starting for "" (driver="docker")
	I0108 23:01:37.414736  366443 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0108 23:01:37.414979  366443 start.go:159] libmachine.API.Create for "ingress-addon-legacy-713577" (driver="docker")
	I0108 23:01:37.415012  366443 client.go:168] LocalClient.Create starting
	I0108 23:01:37.415099  366443 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17830-321683/.minikube/certs/ca.pem
	I0108 23:01:37.415148  366443 main.go:141] libmachine: Decoding PEM data...
	I0108 23:01:37.415166  366443 main.go:141] libmachine: Parsing certificate...
	I0108 23:01:37.415223  366443 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17830-321683/.minikube/certs/cert.pem
	I0108 23:01:37.415243  366443 main.go:141] libmachine: Decoding PEM data...
	I0108 23:01:37.415252  366443 main.go:141] libmachine: Parsing certificate...
	I0108 23:01:37.415590  366443 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-713577 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0108 23:01:37.434245  366443 cli_runner.go:211] docker network inspect ingress-addon-legacy-713577 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0108 23:01:37.434345  366443 network_create.go:281] running [docker network inspect ingress-addon-legacy-713577] to gather additional debugging logs...
	I0108 23:01:37.434364  366443 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-713577
	W0108 23:01:37.450959  366443 cli_runner.go:211] docker network inspect ingress-addon-legacy-713577 returned with exit code 1
	I0108 23:01:37.450996  366443 network_create.go:284] error running [docker network inspect ingress-addon-legacy-713577]: docker network inspect ingress-addon-legacy-713577: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-713577 not found
	I0108 23:01:37.451015  366443 network_create.go:286] output of [docker network inspect ingress-addon-legacy-713577]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-713577 not found
	
	** /stderr **
	I0108 23:01:37.451138  366443 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 23:01:37.468041  366443 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0005a4ca0}
	I0108 23:01:37.468089  366443 network_create.go:124] attempt to create docker network ingress-addon-legacy-713577 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0108 23:01:37.468142  366443 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-713577 ingress-addon-legacy-713577
	I0108 23:01:37.524825  366443 network_create.go:108] docker network ingress-addon-legacy-713577 192.168.49.0/24 created
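	As a quick sanity check under the same names (network, subnet, and gateway exactly as created above), the result can be read back from the Docker daemon:

	docker network inspect ingress-addon-legacy-713577 \
	  --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
	# expected: 192.168.49.0/24 192.168.49.1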
	I0108 23:01:37.524869  366443 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-713577" container
	I0108 23:01:37.524944  366443 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0108 23:01:37.540786  366443 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-713577 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-713577 --label created_by.minikube.sigs.k8s.io=true
	I0108 23:01:37.558471  366443 oci.go:103] Successfully created a docker volume ingress-addon-legacy-713577
	I0108 23:01:37.558584  366443 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-713577-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-713577 --entrypoint /usr/bin/test -v ingress-addon-legacy-713577:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 -d /var/lib
	I0108 23:01:39.290478  366443 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-713577-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-713577 --entrypoint /usr/bin/test -v ingress-addon-legacy-713577:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 -d /var/lib: (1.731849034s)
	I0108 23:01:39.290519  366443 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-713577
	I0108 23:01:39.290551  366443 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0108 23:01:39.290577  366443 kic.go:194] Starting extracting preloaded images to volume ...
	I0108 23:01:39.290660  366443 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17830-321683/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-713577:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 -I lz4 -xf /preloaded.tar -C /extractDir
	I0108 23:01:44.635999  366443 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17830-321683/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-713577:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 -I lz4 -xf /preloaded.tar -C /extractDir: (5.345294095s)
	I0108 23:01:44.636045  366443 kic.go:203] duration metric: took 5.345464 seconds to extract preloaded images to volume
	W0108 23:01:44.636178  366443 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0108 23:01:44.636269  366443 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0108 23:01:44.687331  366443 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-713577 --name ingress-addon-legacy-713577 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-713577 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-713577 --network ingress-addon-legacy-713577 --ip 192.168.49.2 --volume ingress-addon-legacy-713577:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617
	I0108 23:01:45.007178  366443 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-713577 --format={{.State.Running}}
	I0108 23:01:45.025776  366443 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-713577 --format={{.State.Status}}
	I0108 23:01:45.042998  366443 cli_runner.go:164] Run: docker exec ingress-addon-legacy-713577 stat /var/lib/dpkg/alternatives/iptables
	I0108 23:01:45.098995  366443 oci.go:144] the created container "ingress-addon-legacy-713577" has a running status.
	I0108 23:01:45.099043  366443 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17830-321683/.minikube/machines/ingress-addon-legacy-713577/id_rsa...
	I0108 23:01:45.237339  366443 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-321683/.minikube/machines/ingress-addon-legacy-713577/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0108 23:01:45.237391  366443 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17830-321683/.minikube/machines/ingress-addon-legacy-713577/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0108 23:01:45.257904  366443 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-713577 --format={{.State.Status}}
	I0108 23:01:45.275001  366443 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0108 23:01:45.275022  366443 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-713577 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0108 23:01:45.340506  366443 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-713577 --format={{.State.Status}}
	I0108 23:01:45.361397  366443 machine.go:88] provisioning docker machine ...
	I0108 23:01:45.361435  366443 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-713577"
	I0108 23:01:45.361498  366443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-713577
	I0108 23:01:45.379230  366443 main.go:141] libmachine: Using SSH client type: native
	I0108 23:01:45.379866  366443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 33089 <nil> <nil>}
	I0108 23:01:45.379899  366443 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-713577 && echo "ingress-addon-legacy-713577" | sudo tee /etc/hostname
	I0108 23:01:45.380652  366443 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35330->127.0.0.1:33089: read: connection reset by peer
	I0108 23:01:48.526578  366443 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-713577
	
	I0108 23:01:48.526694  366443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-713577
	I0108 23:01:48.542729  366443 main.go:141] libmachine: Using SSH client type: native
	I0108 23:01:48.543075  366443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 33089 <nil> <nil>}
	I0108 23:01:48.543095  366443 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-713577' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-713577/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-713577' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 23:01:48.675490  366443 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 23:01:48.675528  366443 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17830-321683/.minikube CaCertPath:/home/jenkins/minikube-integration/17830-321683/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17830-321683/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17830-321683/.minikube}
	I0108 23:01:48.675575  366443 ubuntu.go:177] setting up certificates
	I0108 23:01:48.675590  366443 provision.go:83] configureAuth start
	I0108 23:01:48.675656  366443 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-713577
	I0108 23:01:48.691588  366443 provision.go:138] copyHostCerts
	I0108 23:01:48.691634  366443 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-321683/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17830-321683/.minikube/ca.pem
	I0108 23:01:48.691677  366443 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-321683/.minikube/ca.pem, removing ...
	I0108 23:01:48.691689  366443 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-321683/.minikube/ca.pem
	I0108 23:01:48.691764  366443 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-321683/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17830-321683/.minikube/ca.pem (1082 bytes)
	I0108 23:01:48.691892  366443 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-321683/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17830-321683/.minikube/cert.pem
	I0108 23:01:48.691925  366443 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-321683/.minikube/cert.pem, removing ...
	I0108 23:01:48.691935  366443 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-321683/.minikube/cert.pem
	I0108 23:01:48.691979  366443 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-321683/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17830-321683/.minikube/cert.pem (1123 bytes)
	I0108 23:01:48.692055  366443 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-321683/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17830-321683/.minikube/key.pem
	I0108 23:01:48.692077  366443 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-321683/.minikube/key.pem, removing ...
	I0108 23:01:48.692083  366443 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-321683/.minikube/key.pem
	I0108 23:01:48.692118  366443 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-321683/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17830-321683/.minikube/key.pem (1679 bytes)
	I0108 23:01:48.692181  366443 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17830-321683/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17830-321683/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17830-321683/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-713577 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-713577]
	I0108 23:01:48.775057  366443 provision.go:172] copyRemoteCerts
	I0108 23:01:48.775144  366443 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 23:01:48.775205  366443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-713577
	I0108 23:01:48.791784  366443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33089 SSHKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/ingress-addon-legacy-713577/id_rsa Username:docker}
	I0108 23:01:48.888007  366443 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-321683/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0108 23:01:48.888087  366443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-321683/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0108 23:01:48.911038  366443 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-321683/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0108 23:01:48.911105  366443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-321683/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0108 23:01:48.933069  366443 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-321683/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0108 23:01:48.933125  366443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-321683/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 23:01:48.954387  366443 provision.go:86] duration metric: configureAuth took 278.782261ms
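	If the generated server certificate ever needs checking against the SAN list above, one way is to read it back from the node (a sketch; the openssl invocation is an assumption, not part of this run, and -ext requires OpenSSL 1.1.1+):

	out/minikube-linux-amd64 -p ingress-addon-legacy-713577 ssh \
	  "sudo openssl x509 -in /etc/docker/server.pem -noout -subject -ext subjectAltName"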
	I0108 23:01:48.954443  366443 ubuntu.go:193] setting minikube options for container-runtime
	I0108 23:01:48.954641  366443 config.go:182] Loaded profile config "ingress-addon-legacy-713577": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0108 23:01:48.954744  366443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-713577
	I0108 23:01:48.971315  366443 main.go:141] libmachine: Using SSH client type: native
	I0108 23:01:48.971691  366443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 33089 <nil> <nil>}
	I0108 23:01:48.971709  366443 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 23:01:49.220531  366443 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 23:01:49.220564  366443 machine.go:91] provisioned docker machine in 3.859139619s
	I0108 23:01:49.220574  366443 client.go:171] LocalClient.Create took 11.805555554s
	I0108 23:01:49.220625  366443 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-713577" took 11.80564733s
	I0108 23:01:49.220635  366443 start.go:300] post-start starting for "ingress-addon-legacy-713577" (driver="docker")
	I0108 23:01:49.220646  366443 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 23:01:49.220700  366443 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 23:01:49.220736  366443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-713577
	I0108 23:01:49.238436  366443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33089 SSHKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/ingress-addon-legacy-713577/id_rsa Username:docker}
	I0108 23:01:49.336592  366443 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 23:01:49.339785  366443 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 23:01:49.339824  366443 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 23:01:49.339835  366443 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 23:01:49.339844  366443 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0108 23:01:49.339858  366443 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-321683/.minikube/addons for local assets ...
	I0108 23:01:49.339910  366443 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-321683/.minikube/files for local assets ...
	I0108 23:01:49.340053  366443 filesync.go:149] local asset: /home/jenkins/minikube-integration/17830-321683/.minikube/files/etc/ssl/certs/3283842.pem -> 3283842.pem in /etc/ssl/certs
	I0108 23:01:49.340070  366443 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-321683/.minikube/files/etc/ssl/certs/3283842.pem -> /etc/ssl/certs/3283842.pem
	I0108 23:01:49.340177  366443 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 23:01:49.348404  366443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-321683/.minikube/files/etc/ssl/certs/3283842.pem --> /etc/ssl/certs/3283842.pem (1708 bytes)
	I0108 23:01:49.371152  366443 start.go:303] post-start completed in 150.501664ms
	I0108 23:01:49.371548  366443 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-713577
	I0108 23:01:49.388250  366443 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/config.json ...
	I0108 23:01:49.388526  366443 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 23:01:49.388583  366443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-713577
	I0108 23:01:49.405659  366443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33089 SSHKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/ingress-addon-legacy-713577/id_rsa Username:docker}
	I0108 23:01:49.500279  366443 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 23:01:49.504722  366443 start.go:128] duration metric: createHost completed in 12.092667s
	I0108 23:01:49.504747  366443 start.go:83] releasing machines lock for "ingress-addon-legacy-713577", held for 12.092828228s
	I0108 23:01:49.504816  366443 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-713577
	I0108 23:01:49.521243  366443 ssh_runner.go:195] Run: cat /version.json
	I0108 23:01:49.521261  366443 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 23:01:49.521304  366443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-713577
	I0108 23:01:49.521327  366443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-713577
	I0108 23:01:49.538902  366443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33089 SSHKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/ingress-addon-legacy-713577/id_rsa Username:docker}
	I0108 23:01:49.540328  366443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33089 SSHKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/ingress-addon-legacy-713577/id_rsa Username:docker}
	I0108 23:01:49.720938  366443 ssh_runner.go:195] Run: systemctl --version
	I0108 23:01:49.725460  366443 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 23:01:49.864278  366443 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 23:01:49.868947  366443 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 23:01:49.888146  366443 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0108 23:01:49.888242  366443 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 23:01:49.916713  366443 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0108 23:01:49.916739  366443 start.go:475] detecting cgroup driver to use...
	I0108 23:01:49.916772  366443 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0108 23:01:49.916814  366443 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 23:01:49.931853  366443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 23:01:49.943057  366443 docker.go:203] disabling cri-docker service (if available) ...
	I0108 23:01:49.943160  366443 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 23:01:49.956422  366443 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 23:01:49.969759  366443 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 23:01:50.049956  366443 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 23:01:50.126236  366443 docker.go:219] disabling docker service ...
	I0108 23:01:50.126295  366443 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 23:01:50.144327  366443 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 23:01:50.154969  366443 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 23:01:50.237903  366443 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 23:01:50.324203  366443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 23:01:50.335320  366443 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 23:01:50.350442  366443 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0108 23:01:50.350523  366443 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 23:01:50.360008  366443 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 23:01:50.360075  366443 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 23:01:50.369917  366443 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 23:01:50.379097  366443 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 23:01:50.388711  366443 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 23:01:50.397315  366443 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 23:01:50.405384  366443 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 23:01:50.413012  366443 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 23:01:50.488593  366443 ssh_runner.go:195] Run: sudo systemctl restart crio
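	Collapsed into one script, the cri-o drop-in edits from 23:01:50.350 onward amount to the following (a sketch assembled from the exact sed expressions and service commands logged above):

	# rewrite /etc/crio/crio.conf.d/02-crio.conf, then pick up the new config
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	sudo systemctl daemon-reload && sudo systemctl restart crio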
	I0108 23:01:50.591953  366443 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 23:01:50.592043  366443 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 23:01:50.595558  366443 start.go:543] Will wait 60s for crictl version
	I0108 23:01:50.595624  366443 ssh_runner.go:195] Run: which crictl
	I0108 23:01:50.598705  366443 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 23:01:50.634284  366443 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0108 23:01:50.634359  366443 ssh_runner.go:195] Run: crio --version
	I0108 23:01:50.669652  366443 ssh_runner.go:195] Run: crio --version
	I0108 23:01:50.705229  366443 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I0108 23:01:50.706842  366443 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-713577 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 23:01:50.723499  366443 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0108 23:01:50.727325  366443 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 23:01:50.737897  366443 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0108 23:01:50.737949  366443 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 23:01:50.783890  366443 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0108 23:01:50.783953  366443 ssh_runner.go:195] Run: which lz4
	I0108 23:01:50.787576  366443 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-321683/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0108 23:01:50.787685  366443 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0108 23:01:50.791078  366443 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0108 23:01:50.791114  366443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-321683/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I0108 23:01:51.708861  366443 crio.go:444] Took 0.921199 seconds to copy over tarball
	I0108 23:01:51.708927  366443 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0108 23:01:54.003447  366443 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.294491394s)
	I0108 23:01:54.003475  366443 crio.go:451] Took 2.294587 seconds to extract the tarball
	I0108 23:01:54.003484  366443 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0108 23:01:54.075413  366443 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 23:01:54.107760  366443 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0108 23:01:54.107787  366443 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0108 23:01:54.107876  366443 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0108 23:01:54.107914  366443 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0108 23:01:54.107919  366443 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0108 23:01:54.107931  366443 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0108 23:01:54.107938  366443 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0108 23:01:54.108006  366443 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0108 23:01:54.107912  366443 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0108 23:01:54.107864  366443 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 23:01:54.109216  366443 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0108 23:01:54.109235  366443 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0108 23:01:54.109245  366443 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0108 23:01:54.109234  366443 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 23:01:54.109209  366443 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0108 23:01:54.109286  366443 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0108 23:01:54.109336  366443 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0108 23:01:54.109353  366443 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0108 23:01:54.271907  366443 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0108 23:01:54.281952  366443 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0108 23:01:54.294879  366443 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0108 23:01:54.298740  366443 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0108 23:01:54.301922  366443 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0108 23:01:54.306039  366443 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0108 23:01:54.313174  366443 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0108 23:01:54.313319  366443 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0108 23:01:54.313385  366443 ssh_runner.go:195] Run: which crictl
	I0108 23:01:54.319436  366443 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0108 23:01:54.353629  366443 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0108 23:01:54.353679  366443 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0108 23:01:54.353727  366443 ssh_runner.go:195] Run: which crictl
	I0108 23:01:54.369069  366443 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0108 23:01:54.369128  366443 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0108 23:01:54.369168  366443 ssh_runner.go:195] Run: which crictl
	I0108 23:01:54.373809  366443 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0108 23:01:54.373862  366443 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0108 23:01:54.373866  366443 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0108 23:01:54.373900  366443 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0108 23:01:54.373921  366443 ssh_runner.go:195] Run: which crictl
	I0108 23:01:54.373941  366443 ssh_runner.go:195] Run: which crictl
	I0108 23:01:54.375671  366443 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0108 23:01:54.375725  366443 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0108 23:01:54.375756  366443 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0108 23:01:54.375802  366443 ssh_runner.go:195] Run: which crictl
	I0108 23:01:54.403217  366443 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 23:01:54.452779  366443 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0108 23:01:54.452827  366443 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0108 23:01:54.452854  366443 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0108 23:01:54.452901  366443 ssh_runner.go:195] Run: which crictl
	I0108 23:01:54.452928  366443 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0108 23:01:54.452854  366443 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0108 23:01:54.452933  366443 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0108 23:01:54.545117  366443 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0108 23:01:54.545141  366443 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-321683/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0108 23:01:54.662599  366443 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-321683/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0108 23:01:54.662682  366443 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0108 23:01:54.662718  366443 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-321683/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I0108 23:01:54.662791  366443 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-321683/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0108 23:01:54.662859  366443 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-321683/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I0108 23:01:54.662890  366443 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-321683/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I0108 23:01:54.694217  366443 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-321683/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0108 23:01:54.694282  366443 cache_images.go:92] LoadImages completed in 586.480079ms
	W0108 23:01:54.694373  366443 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17830-321683/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0: no such file or directory
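
Note on the warning above: the v1.18.20 preload tarball and the local per-image cache were both missing these images, so the load step is skipped and kubeadm pulls the images itself during preflight (see the "[preflight] Pulling images required..." line further down). A minimal sketch, assuming crictl's JSON output schema, of the presence check that drives the "needs transfer" decisions:

// Sketch (assumed schema): report whether the CRI image store already
// holds a given tag by decoding `crictl images --output json`.
package preload

import (
	"encoding/json"
	"os/exec"
)

type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func hasImage(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, t := range img.RepoTags {
			if t == tag {
				return true, nil
			}
		}
	}
	return false, nil
}
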
	I0108 23:01:54.694450  366443 ssh_runner.go:195] Run: crio config
	I0108 23:01:54.735685  366443 cni.go:84] Creating CNI manager for ""
	I0108 23:01:54.735706  366443 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0108 23:01:54.735727  366443 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 23:01:54.735751  366443 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-713577 NodeName:ingress-addon-legacy-713577 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0108 23:01:54.735909  366443 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-713577"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
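
The evictionHard thresholds of "0%" above (together with imageGCHighThresholdPercent: 100) disable disk-based eviction inside the node, per the inline comment. Configs like this are rendered from Go templates (kubeadm.go:181); an illustrative sketch, not minikube's actual template, of rendering a ClusterConfiguration with text/template:

// Illustrative only: render a kubeadm ClusterConfiguration like the one
// above from a Go template. Field values are taken from the log.
package main

import (
	"os"
	"text/template"
)

const clusterTmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
clusterName: {{.ClusterName}}
controlPlaneEndpoint: {{.ControlPlaneEndpoint}}:8443
networking:
  dnsDomain: {{.DNSDomain}}
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	data := struct {
		KubernetesVersion, ClusterName, ControlPlaneEndpoint, DNSDomain, PodSubnet string
	}{"v1.18.20", "mk", "control-plane.minikube.internal", "cluster.local", "10.244.0.0/16"}
	// Must panics on a bad template; Execute writes the rendered YAML.
	if err := template.Must(template.New("kubeadm").Parse(clusterTmpl)).Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}
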
	
	I0108 23:01:54.735994  366443 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-713577 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-713577 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 23:01:54.736056  366443 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0108 23:01:54.744339  366443 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 23:01:54.744397  366443 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 23:01:54.752636  366443 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I0108 23:01:54.768617  366443 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0108 23:01:54.784634  366443 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
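
The three "scp memory" transfers above materialize the kubelet drop-in (10-kubeadm.conf), the kubelet.service unit, and kubeadm.yaml.new on the node. A hedged sketch of writing such a systemd drop-in (paths from the log; most kubelet flags elided; error handling trimmed):

// Sketch: install a kubelet systemd drop-in and reload systemd so the
// new ExecStart takes effect.
package main

import (
	"os"
	"os/exec"
)

func main() {
	unit := `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --node-ip=192.168.49.2

[Install]
`
	_ = os.MkdirAll("/etc/systemd/system/kubelet.service.d", 0o755)
	_ = os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(unit), 0o644)
	// Pick up the new drop-in.
	_ = exec.Command("systemctl", "daemon-reload").Run()
}
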
	I0108 23:01:54.800245  366443 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0108 23:01:54.803552  366443 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 23:01:54.813290  366443 certs.go:56] Setting up /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577 for IP: 192.168.49.2
	I0108 23:01:54.813332  366443 certs.go:190] acquiring lock for shared ca certs: {Name:mka0fb25b2b3d7c6ea0a3bf3a5eb1e0289391c11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 23:01:54.813473  366443 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17830-321683/.minikube/ca.key
	I0108 23:01:54.813525  366443 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17830-321683/.minikube/proxy-client-ca.key
	I0108 23:01:54.813578  366443 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/client.key
	I0108 23:01:54.813595  366443 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/client.crt with IP's: []
	I0108 23:01:55.102139  366443 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/client.crt ...
	I0108 23:01:55.102177  366443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/client.crt: {Name:mk4416311350c7d45b23c29654af297858154477 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 23:01:55.102355  366443 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/client.key ...
	I0108 23:01:55.102381  366443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/client.key: {Name:mkd587a71677b884c7d942adf005e4876fd7f503 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 23:01:55.102457  366443 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/apiserver.key.dd3b5fb2
	I0108 23:01:55.102472  366443 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0108 23:01:55.172365  366443 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/apiserver.crt.dd3b5fb2 ...
	I0108 23:01:55.172398  366443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/apiserver.crt.dd3b5fb2: {Name:mk15d9c18dec38b51f14b4342d87fee72be69a76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 23:01:55.172560  366443 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/apiserver.key.dd3b5fb2 ...
	I0108 23:01:55.172574  366443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/apiserver.key.dd3b5fb2: {Name:mkc13646e069a3518ebd1252fe63ce4eca61b253 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 23:01:55.172636  366443 certs.go:337] copying /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/apiserver.crt
	I0108 23:01:55.172736  366443 certs.go:341] copying /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/apiserver.key
	I0108 23:01:55.172795  366443 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/proxy-client.key
	I0108 23:01:55.172811  366443 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/proxy-client.crt with IP's: []
	I0108 23:01:55.330073  366443 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/proxy-client.crt ...
	I0108 23:01:55.330110  366443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/proxy-client.crt: {Name:mke4570347fc2f4ed6166f8bc3ba00dc297403a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 23:01:55.330299  366443 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/proxy-client.key ...
	I0108 23:01:55.330313  366443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/proxy-client.key: {Name:mk57c93d5d82e622862a76ecd95b62b9ba5d2401 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
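
Each certs.go:319/crypto.go:68 pair above generates a fresh key and a certificate signed by one of the pre-existing CAs. A minimal sketch of that pattern with crypto/x509 (subject, serial, and lifetime are illustrative, not minikube's actual values):

// Sketch: create a key pair and a client certificate signed by an
// existing CA, the generate-and-sign pattern the log reports.
package certs

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"time"
)

// signedClientCert returns the DER-encoded certificate and its new key.
func signedClientCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube-user"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	return der, key, err
}
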
	I0108 23:01:55.330376  366443 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0108 23:01:55.330401  366443 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0108 23:01:55.330414  366443 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0108 23:01:55.330429  366443 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0108 23:01:55.330445  366443 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-321683/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0108 23:01:55.330458  366443 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-321683/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0108 23:01:55.330470  366443 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-321683/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0108 23:01:55.330482  366443 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-321683/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0108 23:01:55.330529  366443 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-321683/.minikube/certs/home/jenkins/minikube-integration/17830-321683/.minikube/certs/328384.pem (1338 bytes)
	W0108 23:01:55.330562  366443 certs.go:433] ignoring /home/jenkins/minikube-integration/17830-321683/.minikube/certs/home/jenkins/minikube-integration/17830-321683/.minikube/certs/328384_empty.pem, impossibly tiny 0 bytes
	I0108 23:01:55.330573  366443 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-321683/.minikube/certs/home/jenkins/minikube-integration/17830-321683/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 23:01:55.330595  366443 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-321683/.minikube/certs/home/jenkins/minikube-integration/17830-321683/.minikube/certs/ca.pem (1082 bytes)
	I0108 23:01:55.330620  366443 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-321683/.minikube/certs/home/jenkins/minikube-integration/17830-321683/.minikube/certs/cert.pem (1123 bytes)
	I0108 23:01:55.330651  366443 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-321683/.minikube/certs/home/jenkins/minikube-integration/17830-321683/.minikube/certs/key.pem (1679 bytes)
	I0108 23:01:55.330701  366443 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-321683/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17830-321683/.minikube/files/etc/ssl/certs/3283842.pem (1708 bytes)
	I0108 23:01:55.330731  366443 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-321683/.minikube/certs/328384.pem -> /usr/share/ca-certificates/328384.pem
	I0108 23:01:55.330747  366443 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-321683/.minikube/files/etc/ssl/certs/3283842.pem -> /usr/share/ca-certificates/3283842.pem
	I0108 23:01:55.330759  366443 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-321683/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0108 23:01:55.331363  366443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 23:01:55.353583  366443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0108 23:01:55.375109  366443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 23:01:55.396990  366443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0108 23:01:55.418546  366443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-321683/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 23:01:55.440247  366443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-321683/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0108 23:01:55.461775  366443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-321683/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 23:01:55.483229  366443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-321683/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0108 23:01:55.504243  366443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-321683/.minikube/certs/328384.pem --> /usr/share/ca-certificates/328384.pem (1338 bytes)
	I0108 23:01:55.525757  366443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-321683/.minikube/files/etc/ssl/certs/3283842.pem --> /usr/share/ca-certificates/3283842.pem (1708 bytes)
	I0108 23:01:55.547068  366443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-321683/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 23:01:55.568996  366443 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 23:01:55.584415  366443 ssh_runner.go:195] Run: openssl version
	I0108 23:01:55.589699  366443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3283842.pem && ln -fs /usr/share/ca-certificates/3283842.pem /etc/ssl/certs/3283842.pem"
	I0108 23:01:55.598607  366443 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3283842.pem
	I0108 23:01:55.602328  366443 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 22:58 /usr/share/ca-certificates/3283842.pem
	I0108 23:01:55.602405  366443 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3283842.pem
	I0108 23:01:55.609005  366443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3283842.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 23:01:55.617828  366443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 23:01:55.626675  366443 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 23:01:55.630205  366443 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 22:52 /usr/share/ca-certificates/minikubeCA.pem
	I0108 23:01:55.630275  366443 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 23:01:55.636752  366443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 23:01:55.645327  366443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/328384.pem && ln -fs /usr/share/ca-certificates/328384.pem /etc/ssl/certs/328384.pem"
	I0108 23:01:55.653720  366443 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/328384.pem
	I0108 23:01:55.657014  366443 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 22:58 /usr/share/ca-certificates/328384.pem
	I0108 23:01:55.657082  366443 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/328384.pem
	I0108 23:01:55.663350  366443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/328384.pem /etc/ssl/certs/51391683.0"
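
The "ln -fs" runs above install each CA under OpenSSL's hashed-name convention, /etc/ssl/certs/<subject-hash>.0, where the hash is the output of "openssl x509 -hash -noout" (b5213941.0 for minikubeCA.pem, for example). A sketch of that step (error handling trimmed):

// Sketch: compute the OpenSSL subject hash of a PEM file and symlink it
// into /etc/ssl/certs so the system trust store can find it.
package certs

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func trustCert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
	_ = os.Remove(link) // mirror ln -fs: replace any existing link
	return os.Symlink(pemPath, link)
}
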
	I0108 23:01:55.671779  366443 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 23:01:55.674712  366443 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 23:01:55.674759  366443 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-713577 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-713577 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 23:01:55.674832  366443 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0108 23:01:55.674882  366443 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 23:01:55.706993  366443 cri.go:89] found id: ""
	I0108 23:01:55.707052  366443 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 23:01:55.715214  366443 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 23:01:55.723163  366443 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0108 23:01:55.723252  366443 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 23:01:55.731022  366443 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 23:01:55.731073  366443 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 23:01:55.773152  366443 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0108 23:01:55.773223  366443 kubeadm.go:322] [preflight] Running pre-flight checks
	I0108 23:01:55.812587  366443 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0108 23:01:55.812694  366443 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1047-gcp
	I0108 23:01:55.812783  366443 kubeadm.go:322] OS: Linux
	I0108 23:01:55.812854  366443 kubeadm.go:322] CGROUPS_CPU: enabled
	I0108 23:01:55.812919  366443 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0108 23:01:55.812979  366443 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0108 23:01:55.813047  366443 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0108 23:01:55.813124  366443 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0108 23:01:55.813202  366443 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0108 23:01:55.882048  366443 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 23:01:55.882190  366443 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 23:01:55.882312  366443 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0108 23:01:56.066051  366443 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 23:01:56.066968  366443 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 23:01:56.067046  366443 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0108 23:01:56.146828  366443 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 23:01:56.150714  366443 out.go:204]   - Generating certificates and keys ...
	I0108 23:01:56.150832  366443 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0108 23:01:56.150922  366443 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0108 23:01:56.316246  366443 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0108 23:01:56.713480  366443 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0108 23:01:56.787616  366443 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0108 23:01:56.841020  366443 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0108 23:01:57.188090  366443 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0108 23:01:57.188348  366443 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-713577 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0108 23:01:57.354381  366443 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0108 23:01:57.354554  366443 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-713577 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0108 23:01:57.411386  366443 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0108 23:01:57.463372  366443 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0108 23:01:57.588213  366443 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0108 23:01:57.588341  366443 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 23:01:57.680261  366443 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 23:01:57.857550  366443 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 23:01:58.002519  366443 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 23:01:58.156380  366443 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 23:01:58.156975  366443 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 23:01:58.158996  366443 out.go:204]   - Booting up control plane ...
	I0108 23:01:58.159087  366443 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 23:01:58.163643  366443 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 23:01:58.164635  366443 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 23:01:58.165284  366443 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 23:01:58.167233  366443 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 23:02:04.670370  366443 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.503059 seconds
	I0108 23:02:04.670675  366443 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 23:02:04.681725  366443 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 23:02:05.197380  366443 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 23:02:05.197520  366443 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-713577 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0108 23:02:05.704569  366443 kubeadm.go:322] [bootstrap-token] Using token: qvdvgs.80kdvr42wmz1mccy
	I0108 23:02:05.706044  366443 out.go:204]   - Configuring RBAC rules ...
	I0108 23:02:05.706216  366443 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 23:02:05.709560  366443 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 23:02:05.715132  366443 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 23:02:05.717145  366443 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 23:02:05.718905  366443 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 23:02:05.720633  366443 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 23:02:05.728393  366443 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 23:02:05.969121  366443 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0108 23:02:06.120452  366443 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0108 23:02:06.121558  366443 kubeadm.go:322] 
	I0108 23:02:06.121655  366443 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0108 23:02:06.121667  366443 kubeadm.go:322] 
	I0108 23:02:06.121779  366443 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0108 23:02:06.121799  366443 kubeadm.go:322] 
	I0108 23:02:06.121834  366443 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0108 23:02:06.121915  366443 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 23:02:06.121996  366443 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 23:02:06.122011  366443 kubeadm.go:322] 
	I0108 23:02:06.122086  366443 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0108 23:02:06.122170  366443 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 23:02:06.122253  366443 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 23:02:06.122264  366443 kubeadm.go:322] 
	I0108 23:02:06.122339  366443 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 23:02:06.122461  366443 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0108 23:02:06.122475  366443 kubeadm.go:322] 
	I0108 23:02:06.122585  366443 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token qvdvgs.80kdvr42wmz1mccy \
	I0108 23:02:06.122730  366443 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:1f39001c59ef0575e1bc9780ffa85edc1681f20574bd7111835c2c8563c3cd8d \
	I0108 23:02:06.122766  366443 kubeadm.go:322]     --control-plane 
	I0108 23:02:06.122774  366443 kubeadm.go:322] 
	I0108 23:02:06.122892  366443 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0108 23:02:06.122900  366443 kubeadm.go:322] 
	I0108 23:02:06.122969  366443 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token qvdvgs.80kdvr42wmz1mccy \
	I0108 23:02:06.123089  366443 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:1f39001c59ef0575e1bc9780ffa85edc1681f20574bd7111835c2c8563c3cd8d 
	I0108 23:02:06.124706  366443 kubeadm.go:322] W0108 23:01:55.772690    1384 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0108 23:02:06.124895  366443 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-gcp\n", err: exit status 1
	I0108 23:02:06.125015  366443 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 23:02:06.125150  366443 kubeadm.go:322] W0108 23:01:58.163355    1384 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0108 23:02:06.125273  366443 kubeadm.go:322] W0108 23:01:58.164381    1384 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0108 23:02:06.125302  366443 cni.go:84] Creating CNI manager for ""
	I0108 23:02:06.125314  366443 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0108 23:02:06.127144  366443 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 23:02:06.128481  366443 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 23:02:06.132312  366443 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I0108 23:02:06.132332  366443 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0108 23:02:06.148532  366443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 23:02:06.568166  366443 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 23:02:06.568272  366443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:02:06.568274  366443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a minikube.k8s.io/name=ingress-addon-legacy-713577 minikube.k8s.io/updated_at=2024_01_08T23_02_06_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:02:06.685219  366443 ops.go:34] apiserver oom_adj: -16
	I0108 23:02:06.685225  366443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:02:07.185275  366443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:02:07.686109  366443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:02:08.185747  366443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:02:08.685762  366443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:02:09.185497  366443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:02:09.685674  366443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:02:10.185742  366443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:02:10.685963  366443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:02:11.185515  366443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:02:11.686227  366443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:02:12.185625  366443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:02:12.686052  366443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:02:13.186223  366443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:02:13.686186  366443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:02:14.186018  366443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:02:14.685589  366443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:02:15.186242  366443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:02:15.685531  366443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:02:16.185390  366443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:02:16.686147  366443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:02:17.186070  366443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:02:17.686279  366443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:02:18.185921  366443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:02:18.686121  366443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:02:19.185695  366443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:02:19.686256  366443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:02:20.185680  366443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:02:20.685879  366443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:02:21.186007  366443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:02:21.685862  366443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:02:22.185955  366443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:02:22.253085  366443 kubeadm.go:1088] duration metric: took 15.684883084s to wait for elevateKubeSystemPrivileges.
	I0108 23:02:22.253123  366443 kubeadm.go:406] StartCluster complete in 26.578367321s
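
The run of "kubectl get sa default" commands above is minikube polling for the default service account to appear after granting kube-system:default cluster-admin via the minikube-rbac clusterrolebinding. A sketch of the same ~500ms retry loop (binary and kubeconfig paths taken from the log):

// Sketch: retry `kubectl get sa default` until the default service
// account exists, matching the poll cadence visible in the timestamps.
package main

import (
	"os/exec"
	"time"
)

func main() {
	for {
		err := exec.Command("sudo", "/var/lib/minikube/binaries/v1.18.20/kubectl",
			"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig").Run()
		if err == nil {
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}
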
	I0108 23:02:22.253148  366443 settings.go:142] acquiring lock: {Name:mkc902113864abc3d31cd188d3cc72ba1bd92615 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 23:02:22.253253  366443 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17830-321683/kubeconfig
	I0108 23:02:22.254419  366443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-321683/kubeconfig: {Name:mkc128765c68b9b4bae543227dc1d65bab54c68e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 23:02:22.254719  366443 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 23:02:22.254864  366443 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0108 23:02:22.254968  366443 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-713577"
	I0108 23:02:22.254980  366443 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-713577"
	I0108 23:02:22.255002  366443 addons.go:237] Setting addon storage-provisioner=true in "ingress-addon-legacy-713577"
	I0108 23:02:22.255007  366443 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-713577"
	I0108 23:02:22.255037  366443 config.go:182] Loaded profile config "ingress-addon-legacy-713577": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0108 23:02:22.255079  366443 host.go:66] Checking if "ingress-addon-legacy-713577" exists ...
	I0108 23:02:22.255456  366443 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-713577 --format={{.State.Status}}
	I0108 23:02:22.255665  366443 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-713577 --format={{.State.Status}}
	I0108 23:02:22.255601  366443 kapi.go:59] client config for ingress-addon-legacy-713577: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/client.crt", KeyFile:"/home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/client.key", CAFile:"/home/jenkins/minikube-integration/17830-321683/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 23:02:22.256487  366443 cert_rotation.go:137] Starting client certificate rotation controller
	I0108 23:02:22.276682  366443 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 23:02:22.278055  366443 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 23:02:22.278073  366443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 23:02:22.278131  366443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-713577
	I0108 23:02:22.282261  366443 kapi.go:59] client config for ingress-addon-legacy-713577: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/client.crt", KeyFile:"/home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/client.key", CAFile:"/home/jenkins/minikube-integration/17830-321683/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 23:02:22.282645  366443 addons.go:237] Setting addon default-storageclass=true in "ingress-addon-legacy-713577"
	I0108 23:02:22.282697  366443 host.go:66] Checking if "ingress-addon-legacy-713577" exists ...
	I0108 23:02:22.283330  366443 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-713577 --format={{.State.Status}}
	I0108 23:02:22.298567  366443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33089 SSHKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/ingress-addon-legacy-713577/id_rsa Username:docker}
	I0108 23:02:22.300197  366443 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 23:02:22.300219  366443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 23:02:22.300283  366443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-713577
	I0108 23:02:22.321832  366443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33089 SSHKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/ingress-addon-legacy-713577/id_rsa Username:docker}
	I0108 23:02:22.353867  366443 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
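
The sed pipeline above injects the host record into the live CoreDNS ConfigMap: a hosts plugin stanza is spliced in ahead of the "forward . /etc/resolv.conf" line and a log directive ahead of "errors", after which the ConfigMap is replaced. Reconstructed from the command itself, the injected stanza is:

        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }

The "host record injected into CoreDNS's ConfigMap" line below confirms it took effect.
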
	I0108 23:02:22.468581  366443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 23:02:22.469567  366443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 23:02:22.759366  366443 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-713577" context rescaled to 1 replicas
	I0108 23:02:22.759416  366443 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 23:02:22.761168  366443 out.go:177] * Verifying Kubernetes components...
	I0108 23:02:22.763128  366443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 23:02:22.953773  366443 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0108 23:02:23.192305  366443 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0108 23:02:23.193992  366443 addons.go:508] enable addons completed in 939.126567ms: enabled=[default-storageclass storage-provisioner]
	I0108 23:02:23.191348  366443 kapi.go:59] client config for ingress-addon-legacy-713577: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/client.crt", KeyFile:"/home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/client.key", CAFile:"/home/jenkins/minikube-integration/17830-321683/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 23:02:23.194353  366443 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-713577" to be "Ready" ...
	I0108 23:02:25.197501  366443 node_ready.go:58] node "ingress-addon-legacy-713577" has status "Ready":"False"
	I0108 23:02:26.697937  366443 node_ready.go:49] node "ingress-addon-legacy-713577" has status "Ready":"True"
	I0108 23:02:26.697966  366443 node_ready.go:38] duration metric: took 3.503577648s waiting for node "ingress-addon-legacy-713577" to be "Ready" ...
	I0108 23:02:26.697983  366443 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 23:02:26.704549  366443 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-t8x7p" in "kube-system" namespace to be "Ready" ...
	I0108 23:02:28.708334  366443 pod_ready.go:102] pod "coredns-66bff467f8-t8x7p" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-01-08 23:02:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0108 23:02:31.210172  366443 pod_ready.go:102] pod "coredns-66bff467f8-t8x7p" in "kube-system" namespace has status "Ready":"False"
	I0108 23:02:33.710378  366443 pod_ready.go:102] pod "coredns-66bff467f8-t8x7p" in "kube-system" namespace has status "Ready":"False"
	I0108 23:02:36.210520  366443 pod_ready.go:92] pod "coredns-66bff467f8-t8x7p" in "kube-system" namespace has status "Ready":"True"
	I0108 23:02:36.210554  366443 pod_ready.go:81] duration metric: took 9.505976991s waiting for pod "coredns-66bff467f8-t8x7p" in "kube-system" namespace to be "Ready" ...
	I0108 23:02:36.210564  366443 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-713577" in "kube-system" namespace to be "Ready" ...
	I0108 23:02:36.214786  366443 pod_ready.go:92] pod "etcd-ingress-addon-legacy-713577" in "kube-system" namespace has status "Ready":"True"
	I0108 23:02:36.214807  366443 pod_ready.go:81] duration metric: took 4.236557ms waiting for pod "etcd-ingress-addon-legacy-713577" in "kube-system" namespace to be "Ready" ...
	I0108 23:02:36.214818  366443 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-713577" in "kube-system" namespace to be "Ready" ...
	I0108 23:02:36.218973  366443 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-713577" in "kube-system" namespace has status "Ready":"True"
	I0108 23:02:36.218995  366443 pod_ready.go:81] duration metric: took 4.170522ms waiting for pod "kube-apiserver-ingress-addon-legacy-713577" in "kube-system" namespace to be "Ready" ...
	I0108 23:02:36.219004  366443 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-713577" in "kube-system" namespace to be "Ready" ...
	I0108 23:02:36.223016  366443 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-713577" in "kube-system" namespace has status "Ready":"True"
	I0108 23:02:36.223042  366443 pod_ready.go:81] duration metric: took 4.031461ms waiting for pod "kube-controller-manager-ingress-addon-legacy-713577" in "kube-system" namespace to be "Ready" ...
	I0108 23:02:36.223051  366443 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kllqh" in "kube-system" namespace to be "Ready" ...
	I0108 23:02:36.226947  366443 pod_ready.go:92] pod "kube-proxy-kllqh" in "kube-system" namespace has status "Ready":"True"
	I0108 23:02:36.226970  366443 pod_ready.go:81] duration metric: took 3.913191ms waiting for pod "kube-proxy-kllqh" in "kube-system" namespace to be "Ready" ...
	I0108 23:02:36.226979  366443 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-713577" in "kube-system" namespace to be "Ready" ...
	I0108 23:02:36.405358  366443 request.go:629] Waited for 178.292347ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-713577
	I0108 23:02:36.605727  366443 request.go:629] Waited for 197.401231ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-713577
	I0108 23:02:36.608630  366443 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-713577" in "kube-system" namespace has status "Ready":"True"
	I0108 23:02:36.608661  366443 pod_ready.go:81] duration metric: took 381.670695ms waiting for pod "kube-scheduler-ingress-addon-legacy-713577" in "kube-system" namespace to be "Ready" ...
	I0108 23:02:36.608678  366443 pod_ready.go:38] duration metric: took 9.910667532s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 23:02:36.608702  366443 api_server.go:52] waiting for apiserver process to appear ...
	I0108 23:02:36.608769  366443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 23:02:36.619532  366443 api_server.go:72] duration metric: took 13.860080831s to wait for apiserver process to appear ...
	I0108 23:02:36.619554  366443 api_server.go:88] waiting for apiserver healthz status ...
	I0108 23:02:36.619573  366443 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0108 23:02:36.624228  366443 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0108 23:02:36.625031  366443 api_server.go:141] control plane version: v1.18.20
	I0108 23:02:36.625055  366443 api_server.go:131] duration metric: took 5.495734ms to wait for apiserver health ...
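
The healthz probe above can be reproduced by hand against the same endpoint. A rough shell equivalent, reusing the profile certificates from the kapi client config logged earlier (minikube's internal probe may differ in detail):

	curl --cacert /home/jenkins/minikube-integration/17830-321683/.minikube/ca.crt \
	     --cert /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/client.crt \
	     --key /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/client.key \
	     https://192.168.49.2:8443/healthz
	# expected body: ok (matching the 200 logged above)
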
	I0108 23:02:36.625063  366443 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 23:02:36.805419  366443 request.go:629] Waited for 180.263762ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0108 23:02:36.811758  366443 system_pods.go:59] 8 kube-system pods found
	I0108 23:02:36.811792  366443 system_pods.go:61] "coredns-66bff467f8-t8x7p" [222f8037-c517-4e7a-b904-844d1bb92065] Running
	I0108 23:02:36.811798  366443 system_pods.go:61] "etcd-ingress-addon-legacy-713577" [9a6a7ac7-4528-48b0-889b-ae966f01a14f] Running
	I0108 23:02:36.811803  366443 system_pods.go:61] "kindnet-5pcgk" [8d00795a-7705-4b58-8f84-4e77882c8f2d] Running
	I0108 23:02:36.811807  366443 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-713577" [972252c0-4945-4a8e-9226-881fc14a8154] Running
	I0108 23:02:36.811811  366443 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-713577" [b845a2a4-6abf-4fca-9c28-1ded5024ed26] Running
	I0108 23:02:36.811815  366443 system_pods.go:61] "kube-proxy-kllqh" [6961cfd0-27da-4744-9414-9d54a86c8ac7] Running
	I0108 23:02:36.811820  366443 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-713577" [e8a7fca3-5c60-4d12-b236-2564d46e3bcd] Running
	I0108 23:02:36.811824  366443 system_pods.go:61] "storage-provisioner" [117b9237-3564-49de-806b-9c0673a4a0ac] Running
	I0108 23:02:36.811831  366443 system_pods.go:74] duration metric: took 186.76213ms to wait for pod list to return data ...
	I0108 23:02:36.811842  366443 default_sa.go:34] waiting for default service account to be created ...
	I0108 23:02:37.005356  366443 request.go:629] Waited for 193.432573ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0108 23:02:37.007769  366443 default_sa.go:45] found service account: "default"
	I0108 23:02:37.007798  366443 default_sa.go:55] duration metric: took 195.949779ms for default service account to be created ...
	I0108 23:02:37.007808  366443 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 23:02:37.206265  366443 request.go:629] Waited for 198.36634ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0108 23:02:37.212671  366443 system_pods.go:86] 8 kube-system pods found
	I0108 23:02:37.212714  366443 system_pods.go:89] "coredns-66bff467f8-t8x7p" [222f8037-c517-4e7a-b904-844d1bb92065] Running
	I0108 23:02:37.212723  366443 system_pods.go:89] "etcd-ingress-addon-legacy-713577" [9a6a7ac7-4528-48b0-889b-ae966f01a14f] Running
	I0108 23:02:37.212736  366443 system_pods.go:89] "kindnet-5pcgk" [8d00795a-7705-4b58-8f84-4e77882c8f2d] Running
	I0108 23:02:37.212742  366443 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-713577" [972252c0-4945-4a8e-9226-881fc14a8154] Running
	I0108 23:02:37.212750  366443 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-713577" [b845a2a4-6abf-4fca-9c28-1ded5024ed26] Running
	I0108 23:02:37.212757  366443 system_pods.go:89] "kube-proxy-kllqh" [6961cfd0-27da-4744-9414-9d54a86c8ac7] Running
	I0108 23:02:37.212764  366443 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-713577" [e8a7fca3-5c60-4d12-b236-2564d46e3bcd] Running
	I0108 23:02:37.212771  366443 system_pods.go:89] "storage-provisioner" [117b9237-3564-49de-806b-9c0673a4a0ac] Running
	I0108 23:02:37.212781  366443 system_pods.go:126] duration metric: took 204.965449ms to wait for k8s-apps to be running ...
	I0108 23:02:37.212795  366443 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 23:02:37.212868  366443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 23:02:37.224439  366443 system_svc.go:56] duration metric: took 11.63339ms WaitForService to wait for kubelet.
	I0108 23:02:37.224475  366443 kubeadm.go:581] duration metric: took 14.465025601s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 23:02:37.224502  366443 node_conditions.go:102] verifying NodePressure condition ...
	I0108 23:02:37.405923  366443 request.go:629] Waited for 181.246843ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0108 23:02:37.408841  366443 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0108 23:02:37.408873  366443 node_conditions.go:123] node cpu capacity is 8
	I0108 23:02:37.408889  366443 node_conditions.go:105] duration metric: took 184.381515ms to run NodePressure ...
	I0108 23:02:37.408905  366443 start.go:228] waiting for startup goroutines ...
	I0108 23:02:37.408914  366443 start.go:233] waiting for cluster config update ...
	I0108 23:02:37.408927  366443 start.go:242] writing updated cluster config ...
	I0108 23:02:37.409264  366443 ssh_runner.go:195] Run: rm -f paused
	I0108 23:02:37.458808  366443 start.go:600] kubectl: 1.29.0, cluster: 1.18.20 (minor skew: 11)
	I0108 23:02:37.461060  366443 out.go:177] 
	W0108 23:02:37.462801  366443 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.18.20.
	I0108 23:02:37.464699  366443 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0108 23:02:37.467039  366443 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-713577" cluster and "default" namespace by default
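
kubectl only guarantees compatibility within one minor version of the apiserver, so the v1.29.0 client against this v1.18.20 cluster (a skew of 11 minors) is what triggers the warning above. Following the hint in the log, a version-matched client can be invoked through the minikube binary itself, for example:

	out/minikube-linux-amd64 -p ingress-addon-legacy-713577 kubectl -- get pods -A
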
	
	
	==> CRI-O <==
	Jan 08 23:05:19 ingress-addon-legacy-713577 crio[960]: time="2024-01-08 23:05:19.822306413Z" level=info msg="Started container" PID=4858 containerID=4b36f0bb0c295fa72ceed8ea8beeb2d7593c16e187e18c3f372e5e1e2b592c29 description=default/hello-world-app-5f5d8b66bb-s4t4q/hello-world-app id=39c9c048-c889-46a5-aa6b-3677e0538f19 name=/runtime.v1alpha2.RuntimeService/StartContainer sandboxID=45852cbb771307a15e25fcdc9a1745e73d4b8fe6b213c6a023b3881fec2990ba
	Jan 08 23:05:22 ingress-addon-legacy-713577 crio[960]: time="2024-01-08 23:05:22.305993220Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=ff5006d6-2429-4e7e-9922-0f6e24c2ae8b name=/runtime.v1alpha2.ImageService/ImageStatus
	Jan 08 23:05:34 ingress-addon-legacy-713577 crio[960]: time="2024-01-08 23:05:34.306992267Z" level=info msg="Stopping pod sandbox: 7ba96c7556b28e312ce3d4543e56c6a1daa7b2619368f81e0cab874b39f7c910" id=097bf260-adaa-4478-b256-1a1ce76795c4 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 08 23:05:34 ingress-addon-legacy-713577 crio[960]: time="2024-01-08 23:05:34.308277868Z" level=info msg="Stopped pod sandbox: 7ba96c7556b28e312ce3d4543e56c6a1daa7b2619368f81e0cab874b39f7c910" id=097bf260-adaa-4478-b256-1a1ce76795c4 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 08 23:05:34 ingress-addon-legacy-713577 crio[960]: time="2024-01-08 23:05:34.739424700Z" level=info msg="Stopping pod sandbox: 7ba96c7556b28e312ce3d4543e56c6a1daa7b2619368f81e0cab874b39f7c910" id=f920308f-b3ef-4e31-b526-59e5515da463 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 08 23:05:34 ingress-addon-legacy-713577 crio[960]: time="2024-01-08 23:05:34.739485260Z" level=info msg="Stopped pod sandbox (already stopped): 7ba96c7556b28e312ce3d4543e56c6a1daa7b2619368f81e0cab874b39f7c910" id=f920308f-b3ef-4e31-b526-59e5515da463 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 08 23:05:35 ingress-addon-legacy-713577 crio[960]: time="2024-01-08 23:05:35.535907831Z" level=info msg="Stopping container: 4f276a13bddfc002871a54b9ab763f6f0c8a602b08d5132eb581f957e4755927 (timeout: 2s)" id=fc41709e-2b9e-4e62-8672-3dcdb114d2a1 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jan 08 23:05:35 ingress-addon-legacy-713577 crio[960]: time="2024-01-08 23:05:35.538838643Z" level=info msg="Stopping container: 4f276a13bddfc002871a54b9ab763f6f0c8a602b08d5132eb581f957e4755927 (timeout: 2s)" id=5f418ea3-38f5-4de3-8de9-e1d6c0352777 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jan 08 23:05:36 ingress-addon-legacy-713577 crio[960]: time="2024-01-08 23:05:36.305816904Z" level=info msg="Stopping pod sandbox: 7ba96c7556b28e312ce3d4543e56c6a1daa7b2619368f81e0cab874b39f7c910" id=ab6cc97d-f903-4765-a432-20c3f9d07eef name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 08 23:05:36 ingress-addon-legacy-713577 crio[960]: time="2024-01-08 23:05:36.305888944Z" level=info msg="Stopped pod sandbox (already stopped): 7ba96c7556b28e312ce3d4543e56c6a1daa7b2619368f81e0cab874b39f7c910" id=ab6cc97d-f903-4765-a432-20c3f9d07eef name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 08 23:05:37 ingress-addon-legacy-713577 crio[960]: time="2024-01-08 23:05:37.544588496Z" level=warning msg="Stopping container 4f276a13bddfc002871a54b9ab763f6f0c8a602b08d5132eb581f957e4755927 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=fc41709e-2b9e-4e62-8672-3dcdb114d2a1 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jan 08 23:05:37 ingress-addon-legacy-713577 conmon[3400]: conmon 4f276a13bddfc002871a <ninfo>: container 3412 exited with status 137
	Jan 08 23:05:37 ingress-addon-legacy-713577 crio[960]: time="2024-01-08 23:05:37.690749588Z" level=info msg="Stopped container 4f276a13bddfc002871a54b9ab763f6f0c8a602b08d5132eb581f957e4755927: ingress-nginx/ingress-nginx-controller-7fcf777cb7-dw5d5/controller" id=5f418ea3-38f5-4de3-8de9-e1d6c0352777 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jan 08 23:05:37 ingress-addon-legacy-713577 crio[960]: time="2024-01-08 23:05:37.690794202Z" level=info msg="Stopped container 4f276a13bddfc002871a54b9ab763f6f0c8a602b08d5132eb581f957e4755927: ingress-nginx/ingress-nginx-controller-7fcf777cb7-dw5d5/controller" id=fc41709e-2b9e-4e62-8672-3dcdb114d2a1 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jan 08 23:05:37 ingress-addon-legacy-713577 crio[960]: time="2024-01-08 23:05:37.691508296Z" level=info msg="Stopping pod sandbox: 4493b069b2d61bb28ea57ef17f437b0481b9e62afc32fbf4de2da7c6d3ab415a" id=32574f4a-66e6-4584-b090-d28448d9a980 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 08 23:05:37 ingress-addon-legacy-713577 crio[960]: time="2024-01-08 23:05:37.691515455Z" level=info msg="Stopping pod sandbox: 4493b069b2d61bb28ea57ef17f437b0481b9e62afc32fbf4de2da7c6d3ab415a" id=2c5b0621-9a87-4ca5-abbe-4408de22e265 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 08 23:05:37 ingress-addon-legacy-713577 crio[960]: time="2024-01-08 23:05:37.694642692Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-V6K6DJZVYUB6SKCD - [0:0]\n:KUBE-HP-WEVR7I3WRNWSI2SU - [0:0]\n-X KUBE-HP-V6K6DJZVYUB6SKCD\n-X KUBE-HP-WEVR7I3WRNWSI2SU\nCOMMIT\n"
	Jan 08 23:05:37 ingress-addon-legacy-713577 crio[960]: time="2024-01-08 23:05:37.696158491Z" level=info msg="Closing host port tcp:80"
	Jan 08 23:05:37 ingress-addon-legacy-713577 crio[960]: time="2024-01-08 23:05:37.696205205Z" level=info msg="Closing host port tcp:443"
	Jan 08 23:05:37 ingress-addon-legacy-713577 crio[960]: time="2024-01-08 23:05:37.697238920Z" level=info msg="Host port tcp:80 does not have an open socket"
	Jan 08 23:05:37 ingress-addon-legacy-713577 crio[960]: time="2024-01-08 23:05:37.697261255Z" level=info msg="Host port tcp:443 does not have an open socket"
	Jan 08 23:05:37 ingress-addon-legacy-713577 crio[960]: time="2024-01-08 23:05:37.697386233Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7fcf777cb7-dw5d5 Namespace:ingress-nginx ID:4493b069b2d61bb28ea57ef17f437b0481b9e62afc32fbf4de2da7c6d3ab415a UID:a779184a-d6e9-411a-8b8c-c368e4b1d7f3 NetNS:/var/run/netns/691c2049-4b9f-4fe1-977e-e1d8af8072dd Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jan 08 23:05:37 ingress-addon-legacy-713577 crio[960]: time="2024-01-08 23:05:37.697508554Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-dw5d5 from CNI network \"kindnet\" (type=ptp)"
	Jan 08 23:05:37 ingress-addon-legacy-713577 crio[960]: time="2024-01-08 23:05:37.728710321Z" level=info msg="Stopped pod sandbox: 4493b069b2d61bb28ea57ef17f437b0481b9e62afc32fbf4de2da7c6d3ab415a" id=32574f4a-66e6-4584-b090-d28448d9a980 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 08 23:05:37 ingress-addon-legacy-713577 crio[960]: time="2024-01-08 23:05:37.728839889Z" level=info msg="Stopped pod sandbox (already stopped): 4493b069b2d61bb28ea57ef17f437b0481b9e62afc32fbf4de2da7c6d3ab415a" id=2c5b0621-9a87-4ca5-abbe-4408de22e265 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
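
The "Restoring iptables rules" entry above is easier to read unescaped: when tearing down the ingress controller sandbox, CRI-O feeds iptables-restore the nat-table payload below, which flushes KUBE-HOSTPORTS and deletes the two per-pod hostport chains it had programmed for hostPorts 80 and 443 (in iptables-restore syntax, declaring a chain flushes it and -X then removes it):

	*nat
	:KUBE-HOSTPORTS - [0:0]
	:KUBE-HP-V6K6DJZVYUB6SKCD - [0:0]
	:KUBE-HP-WEVR7I3WRNWSI2SU - [0:0]
	-X KUBE-HP-V6K6DJZVYUB6SKCD
	-X KUBE-HP-WEVR7I3WRNWSI2SU
	COMMIT
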
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4b36f0bb0c295       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7            23 seconds ago      Running             hello-world-app           0                   45852cbb77130       hello-world-app-5f5d8b66bb-s4t4q
	81c8c93dae389       docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686                    2 minutes ago       Running             nginx                     0                   a371767262a23       nginx
	4f276a13bddfc       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   2 minutes ago       Exited              controller                0                   4493b069b2d61       ingress-nginx-controller-7fcf777cb7-dw5d5
	63ee7de7ee531       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              patch                     0                   b3700ef37d266       ingress-nginx-admission-patch-hmswv
	a27ee8a90bc0e       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              create                    0                   2331ed42c5950       ingress-nginx-admission-create-2ppm7
	180bff8190f29       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   a369e486f7f87       coredns-66bff467f8-t8x7p
	9dc983b83fd91       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       0                   a888799525e9e       storage-provisioner
	2ad72fd1c83b6       docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052                 3 minutes ago       Running             kindnet-cni               0                   ab993137cbf98       kindnet-5pcgk
	6467da867ce5a       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   1da12e1f27cdc       kube-proxy-kllqh
	5bf573551f360       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   3 minutes ago       Running             kube-controller-manager   0                   daca0fa7e32b9       kube-controller-manager-ingress-addon-legacy-713577
	15edf70534c42       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   3 minutes ago       Running             etcd                      0                   e5b3bbf00c944       etcd-ingress-addon-legacy-713577
	5ae4ee13e8ed8       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   3 minutes ago       Running             kube-scheduler            0                   e227484a05502       kube-scheduler-ingress-addon-legacy-713577
	aa5f7a60bb62a       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   3 minutes ago       Running             kube-apiserver            0                   2da7b723e61f2       kube-apiserver-ingress-addon-legacy-713577
	
	
	==> coredns [180bff8190f297b23578de35323256ea9c9b00a6de6ef8904ee7a29163c8a1ce] <==
	[INFO] 10.244.0.5:41720 - 60778 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.004566468s
	[INFO] 10.244.0.5:38329 - 43189 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004567948s
	[INFO] 10.244.0.5:48250 - 46689 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004803067s
	[INFO] 10.244.0.5:41720 - 47231 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004463264s
	[INFO] 10.244.0.5:37351 - 57771 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004674302s
	[INFO] 10.244.0.5:51874 - 27851 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004689401s
	[INFO] 10.244.0.5:39758 - 2511 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004821005s
	[INFO] 10.244.0.5:37890 - 40903 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004735846s
	[INFO] 10.244.0.5:51397 - 53613 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004681891s
	[INFO] 10.244.0.5:41720 - 34162 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004465002s
	[INFO] 10.244.0.5:51874 - 43601 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004493977s
	[INFO] 10.244.0.5:37890 - 9871 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004246864s
	[INFO] 10.244.0.5:37351 - 22440 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004499305s
	[INFO] 10.244.0.5:39758 - 40848 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004376553s
	[INFO] 10.244.0.5:38329 - 44096 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00474726s
	[INFO] 10.244.0.5:51397 - 56738 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004335093s
	[INFO] 10.244.0.5:48250 - 60843 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004767047s
	[INFO] 10.244.0.5:41720 - 36003 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000170146s
	[INFO] 10.244.0.5:37890 - 23364 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00006351s
	[INFO] 10.244.0.5:39758 - 10347 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000053355s
	[INFO] 10.244.0.5:38329 - 22543 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000071229s
	[INFO] 10.244.0.5:37351 - 21671 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000054343s
	[INFO] 10.244.0.5:48250 - 53393 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000105617s
	[INFO] 10.244.0.5:51874 - 33369 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000315787s
	[INFO] 10.244.0.5:51397 - 27256 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000186033s
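
The run of NXDOMAIN answers above is normal resolver search-path expansion rather than a fault: with ndots:5, the pod tries every search suffix (the cluster domains plus the GCE-injected c.k8s-minikube.internal and google.internal visible above) before the bare cluster-local name finally returns NOERROR. As a sketch, a pod's /etc/resolv.conf on this runner would look roughly like the following; the nameserver address is the conventional kube-dns ClusterIP and is an assumption, not taken from this log:

	search default.svc.cluster.local svc.cluster.local cluster.local c.k8s-minikube.internal google.internal
	nameserver 10.96.0.10
	options ndots:5
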
	
	
	==> describe nodes <==
	Name:               ingress-addon-legacy-713577
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-713577
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a
	                    minikube.k8s.io/name=ingress-addon-legacy-713577
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_08T23_02_06_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 23:02:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-713577
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 23:05:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 23:05:36 +0000   Mon, 08 Jan 2024 23:01:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 23:05:36 +0000   Mon, 08 Jan 2024 23:01:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 23:05:36 +0000   Mon, 08 Jan 2024 23:01:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 23:05:36 +0000   Mon, 08 Jan 2024 23:02:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-713577
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	System Info:
	  Machine ID:                 693b4fe07dd44f0e987540d1d5592d1a
	  System UUID:                c8bca025-96f6-40cc-bbea-4636aeb133b8
	  Boot ID:                    fd589fcb-cd24-44e5-9159-e7f1d22abeda
	  Kernel Version:             5.15.0-1047-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-s4t4q                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m47s
	  kube-system                 coredns-66bff467f8-t8x7p                               100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m21s
	  kube-system                 etcd-ingress-addon-legacy-713577                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 kindnet-5pcgk                                          100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m21s
	  kube-system                 kube-apiserver-ingress-addon-legacy-713577             250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-713577    200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 kube-proxy-kllqh                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m21s
	  kube-system                 kube-scheduler-ingress-addon-legacy-713577             100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             120Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 3m45s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m45s (x3 over 3m45s)  kubelet     Node ingress-addon-legacy-713577 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m45s (x3 over 3m45s)  kubelet     Node ingress-addon-legacy-713577 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m45s (x3 over 3m45s)  kubelet     Node ingress-addon-legacy-713577 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m37s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m37s                  kubelet     Node ingress-addon-legacy-713577 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m37s                  kubelet     Node ingress-addon-legacy-713577 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m37s                  kubelet     Node ingress-addon-legacy-713577 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m20s                  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m17s                  kubelet     Node ingress-addon-legacy-713577 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.007357] FS-Cache: O-key=[8] 'b4a20f0200000000'
	[  +0.004934] FS-Cache: N-cookie c=00000021 [p=00000015 fl=2 nc=0 na=1]
	[  +0.007782] FS-Cache: N-cookie d=000000003c359114{9p.inode} n=00000000801d8508
	[  +0.008729] FS-Cache: N-key=[8] 'b4a20f0200000000'
	[  +0.283629] FS-Cache: Duplicate cookie detected
	[  +0.004743] FS-Cache: O-cookie c=0000001b [p=00000015 fl=226 nc=0 na=1]
	[  +0.006795] FS-Cache: O-cookie d=000000003c359114{9p.inode} n=00000000dea4c64f
	[  +0.007371] FS-Cache: O-key=[8] 'baa20f0200000000'
	[  +0.004970] FS-Cache: N-cookie c=00000022 [p=00000015 fl=2 nc=0 na=1]
	[  +0.007951] FS-Cache: N-cookie d=000000003c359114{9p.inode} n=00000000eafd936e
	[  +0.007348] FS-Cache: N-key=[8] 'baa20f0200000000'
	[Jan 8 23:03] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 9e 5f 07 6d 9a 92 56 6c 05 13 9f 31 08 00
	[  +1.007768] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 9e 5f 07 6d 9a 92 56 6c 05 13 9f 31 08 00
	[  +2.015863] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 9e 5f 07 6d 9a 92 56 6c 05 13 9f 31 08 00
	[  +4.127700] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9e 5f 07 6d 9a 92 56 6c 05 13 9f 31 08 00
	[  +8.191395] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 9e 5f 07 6d 9a 92 56 6c 05 13 9f 31 08 00
	[ +16.126923] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 9e 5f 07 6d 9a 92 56 6c 05 13 9f 31 08 00
	[Jan 8 23:04] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 9e 5f 07 6d 9a 92 56 6c 05 13 9f 31 08 00
	
	
	==> etcd [15edf70534c4204bc3f87a3c52ffc63b9c82769282f6693c1bc2504705a90770] <==
	raft2024/01/08 23:01:59 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2024-01-08 23:01:59.444403 W | auth: simple token is not cryptographically signed
	2024-01-08 23:01:59.448229 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2024-01-08 23:01:59.450347 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2024/01/08 23:01:59 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2024-01-08 23:01:59.450825 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2024-01-08 23:01:59.450964 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-01-08 23:01:59.451111 I | embed: listening for peers on 192.168.49.2:2380
	2024-01-08 23:01:59.451214 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2024/01/08 23:01:59 INFO: aec36adc501070cc is starting a new election at term 1
	raft2024/01/08 23:01:59 INFO: aec36adc501070cc became candidate at term 2
	raft2024/01/08 23:01:59 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2024/01/08 23:01:59 INFO: aec36adc501070cc became leader at term 2
	raft2024/01/08 23:01:59 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2024-01-08 23:01:59.780908 I | etcdserver: setting up the initial cluster version to 3.4
	2024-01-08 23:01:59.781873 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-01-08 23:01:59.781931 I | etcdserver/api: enabled capabilities for version 3.4
	2024-01-08 23:01:59.781969 I | embed: ready to serve client requests
	2024-01-08 23:01:59.782086 I | etcdserver: published {Name:ingress-addon-legacy-713577 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2024-01-08 23:01:59.782123 I | embed: ready to serve client requests
	2024-01-08 23:01:59.784442 I | embed: serving client requests on 192.168.49.2:2379
	2024-01-08 23:01:59.784514 I | embed: serving client requests on 127.0.0.1:2379
	2024-01-08 23:02:27.646057 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:711" took too long (110.698256ms) to execute
	2024-01-08 23:02:27.844943 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:1107" took too long (131.006785ms) to execute
	2024-01-08 23:02:28.100798 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (141.857121ms) to execute
	
	
	==> kernel <==
	 23:05:43 up  3:47,  0 users,  load average: 0.24, 0.66, 0.58
	Linux ingress-addon-legacy-713577 5.15.0-1047-gcp #55~20.04.1-Ubuntu SMP Wed Nov 15 11:38:25 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [2ad72fd1c83b6478b6ba29db3ba5013e93722aadc2d73c4e0bf0f12582b5192b] <==
	I0108 23:03:35.198956       1 main.go:227] handling current node
	I0108 23:03:45.210528       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 23:03:45.210556       1 main.go:227] handling current node
	I0108 23:03:55.213788       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 23:03:55.213813       1 main.go:227] handling current node
	I0108 23:04:05.226379       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 23:04:05.226405       1 main.go:227] handling current node
	I0108 23:04:15.230044       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 23:04:15.230068       1 main.go:227] handling current node
	I0108 23:04:25.233568       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 23:04:25.233593       1 main.go:227] handling current node
	I0108 23:04:35.238698       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 23:04:35.238725       1 main.go:227] handling current node
	I0108 23:04:45.250440       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 23:04:45.250465       1 main.go:227] handling current node
	I0108 23:04:55.254571       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 23:04:55.254603       1 main.go:227] handling current node
	I0108 23:05:05.266536       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 23:05:05.266566       1 main.go:227] handling current node
	I0108 23:05:15.271248       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 23:05:15.271308       1 main.go:227] handling current node
	I0108 23:05:25.274949       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 23:05:25.274979       1 main.go:227] handling current node
	I0108 23:05:35.287554       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 23:05:35.287594       1 main.go:227] handling current node
	
	
	==> kube-apiserver [aa5f7a60bb62acf0a34b2bc15047e300bfae45178c2dd43b053171751be4abbe] <==
	E0108 23:02:03.160521       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I0108 23:02:03.264142       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0108 23:02:03.264279       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0108 23:02:03.265382       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0108 23:02:03.265424       1 cache.go:39] Caches are synced for autoregister controller
	I0108 23:02:03.265736       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0108 23:02:04.150045       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0108 23:02:04.150081       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0108 23:02:04.158161       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0108 23:02:04.161002       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0108 23:02:04.161023       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0108 23:02:04.436128       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0108 23:02:04.475978       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0108 23:02:04.570177       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0108 23:02:04.571126       1 controller.go:609] quota admission added evaluator for: endpoints
	I0108 23:02:04.573976       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0108 23:02:05.052488       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0108 23:02:05.521532       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0108 23:02:05.960037       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0108 23:02:06.109697       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0108 23:02:22.030883       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0108 23:02:22.458470       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0108 23:02:38.192625       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0108 23:02:56.066042       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E0108 23:05:35.546681       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	
	
	==> kube-controller-manager [5bf573551f3609ec6e2df0e581e0700b0e143d07a53184990c6837c135e9c4eb] <==
	I0108 23:02:22.053650       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ingress-addon-legacy-713577", UID:"97a26756-5770-4e2d-b612-921e0e7fa916", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node ingress-addon-legacy-713577 event: Registered Node ingress-addon-legacy-713577 in Controller
	I0108 23:02:22.053664       1 shared_informer.go:230] Caches are synced for bootstrap_signer 
	I0108 23:02:22.204285       1 shared_informer.go:230] Caches are synced for attach detach 
	I0108 23:02:22.443787       1 shared_informer.go:230] Caches are synced for stateful set 
	I0108 23:02:22.454084       1 shared_informer.go:230] Caches are synced for deployment 
	I0108 23:02:22.461550       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"788eaa61-9ed8-4abf-95a6-db5ff41afb2c", APIVersion:"apps/v1", ResourceVersion:"354", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 1
	I0108 23:02:22.543488       1 shared_informer.go:230] Caches are synced for disruption 
	I0108 23:02:22.543528       1 disruption.go:339] Sending events to api server.
	I0108 23:02:22.543521       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0108 23:02:22.543609       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0108 23:02:22.544680       1 shared_informer.go:230] Caches are synced for ReplicaSet 
	I0108 23:02:22.555521       1 shared_informer.go:230] Caches are synced for resource quota 
	I0108 23:02:22.556461       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"f8f2aaa8-3965-480a-bf98-6f05b88112d8", APIVersion:"apps/v1", ResourceVersion:"357", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-t8x7p
	I0108 23:02:22.556599       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0108 23:02:22.556636       1 shared_informer.go:230] Caches are synced for resource quota 
	I0108 23:02:27.053885       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0108 23:02:38.183331       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"5f1ab353-ac83-4e21-80f0-d966c21858dd", APIVersion:"apps/v1", ResourceVersion:"466", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0108 23:02:38.198301       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"dcd64634-329d-4bb3-8452-9c53b9f11048", APIVersion:"apps/v1", ResourceVersion:"467", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-dw5d5
	I0108 23:02:38.252982       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"6cd4fa22-81b0-48e3-b121-216f0e27e7e7", APIVersion:"batch/v1", ResourceVersion:"472", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-2ppm7
	I0108 23:02:38.269116       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"70368dba-cf88-4f7b-be66-1bbbdde645e1", APIVersion:"batch/v1", ResourceVersion:"482", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-hmswv
	I0108 23:02:40.459489       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"6cd4fa22-81b0-48e3-b121-216f0e27e7e7", APIVersion:"batch/v1", ResourceVersion:"481", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0108 23:02:40.466763       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"70368dba-cf88-4f7b-be66-1bbbdde645e1", APIVersion:"batch/v1", ResourceVersion:"489", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0108 23:05:17.811805       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"312a8095-f969-45fb-bd33-2c3cc30aa709", APIVersion:"apps/v1", ResourceVersion:"698", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0108 23:05:17.818127       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"78cbf9f0-bb49-44ad-9241-93bc782304c5", APIVersion:"apps/v1", ResourceVersion:"699", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-s4t4q
	E0108 23:05:40.293522       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-wfv97" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	
	==> kube-proxy [6467da867ce5ac9a2f105078a9a271a324c9836cb62592b3fa16b8e442e0afbd] <==
	W0108 23:02:23.074059       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0108 23:02:23.081916       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I0108 23:02:23.082025       1 server_others.go:186] Using iptables Proxier.
	I0108 23:02:23.143757       1 server.go:583] Version: v1.18.20
	I0108 23:02:23.144832       1 config.go:315] Starting service config controller
	I0108 23:02:23.144918       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0108 23:02:23.144397       1 config.go:133] Starting endpoints config controller
	I0108 23:02:23.147352       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0108 23:02:23.245332       1 shared_informer.go:230] Caches are synced for service config 
	I0108 23:02:23.247554       1 shared_informer.go:230] Caches are synced for endpoints config 
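
The first line of this section shows kube-proxy receiving an empty proxy-mode setting and falling back to the iptables proxier. One way to confirm the active mode on the node is to look for the KUBE-SVC-*/KUBE-SEP-* service chains that only the iptables proxier programs; a sketch via minikube ssh (exact chain names vary per service):

	out/minikube-linux-amd64 -p ingress-addon-legacy-713577 ssh "sudo iptables-save -t nat | grep -c KUBE-SVC"
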
	
	
	==> kube-scheduler [5ae4ee13e8ed86b21a28710d00b6facd0c023aa6618d7cb409bcb708d7e8d98f] <==
	I0108 23:02:03.348903       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0108 23:02:03.348931       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0108 23:02:03.350889       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0108 23:02:03.350991       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0108 23:02:03.351298       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0108 23:02:03.351349       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0108 23:02:03.354389       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0108 23:02:03.354456       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0108 23:02:03.354544       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 23:02:03.357742       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 23:02:03.357819       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0108 23:02:03.357931       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 23:02:03.358010       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 23:02:03.357746       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0108 23:02:03.357760       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0108 23:02:03.357811       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 23:02:03.357998       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 23:02:03.358698       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0108 23:02:04.199767       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0108 23:02:04.199794       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 23:02:04.210231       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 23:02:04.231453       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0108 23:02:04.265456       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0108 23:02:04.327738       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0108 23:02:06.651235       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Jan 08 23:04:58 ingress-addon-legacy-713577 kubelet[1850]: E0108 23:04:58.306593    1850 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jan 08 23:04:58 ingress-addon-legacy-713577 kubelet[1850]: E0108 23:04:58.306632    1850 pod_workers.go:191] Error syncing pod 64595748-c645-4a81-a733-7d5b51478af9 ("kube-ingress-dns-minikube_kube-system(64595748-c645-4a81-a733-7d5b51478af9)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Jan 08 23:05:10 ingress-addon-legacy-713577 kubelet[1850]: E0108 23:05:10.306357    1850 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jan 08 23:05:10 ingress-addon-legacy-713577 kubelet[1850]: E0108 23:05:10.306401    1850 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jan 08 23:05:10 ingress-addon-legacy-713577 kubelet[1850]: E0108 23:05:10.306456    1850 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jan 08 23:05:10 ingress-addon-legacy-713577 kubelet[1850]: E0108 23:05:10.306492    1850 pod_workers.go:191] Error syncing pod 64595748-c645-4a81-a733-7d5b51478af9 ("kube-ingress-dns-minikube_kube-system(64595748-c645-4a81-a733-7d5b51478af9)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Jan 08 23:05:17 ingress-addon-legacy-713577 kubelet[1850]: I0108 23:05:17.823141    1850 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Jan 08 23:05:17 ingress-addon-legacy-713577 kubelet[1850]: I0108 23:05:17.954951    1850 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-x9nn4" (UniqueName: "kubernetes.io/secret/5ea93686-a819-4db8-9d8f-9d3b10c5e1fa-default-token-x9nn4") pod "hello-world-app-5f5d8b66bb-s4t4q" (UID: "5ea93686-a819-4db8-9d8f-9d3b10c5e1fa")
	Jan 08 23:05:18 ingress-addon-legacy-713577 kubelet[1850]: W0108 23:05:18.188186    1850 manager.go:1131] Failed to process watch event {EventType:0 Name:/docker/ec06eaaf4a978e1c14eb65a9fcaf456fe806393b73f157f25a3e94a0f0a16625/crio-45852cbb771307a15e25fcdc9a1745e73d4b8fe6b213c6a023b3881fec2990ba WatchSource:0}: Error finding container 45852cbb771307a15e25fcdc9a1745e73d4b8fe6b213c6a023b3881fec2990ba: Status 404 returned error &{%!!(MISSING)s(*http.body=&{0xc0003ed9c0 <nil> <nil> false false {0 0} false false false <nil>}) {%!!(MISSING)s(int32=0) %!!(MISSING)s(uint32=0)} %!!(MISSING)s(bool=false) <nil> %!!(MISSING)s(func(error) error=0x750800) %!!(MISSING)s(func() error=0x750790)}
	Jan 08 23:05:22 ingress-addon-legacy-713577 kubelet[1850]: E0108 23:05:22.306347    1850 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jan 08 23:05:22 ingress-addon-legacy-713577 kubelet[1850]: E0108 23:05:22.306393    1850 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jan 08 23:05:22 ingress-addon-legacy-713577 kubelet[1850]: E0108 23:05:22.306448    1850 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jan 08 23:05:22 ingress-addon-legacy-713577 kubelet[1850]: E0108 23:05:22.306486    1850 pod_workers.go:191] Error syncing pod 64595748-c645-4a81-a733-7d5b51478af9 ("kube-ingress-dns-minikube_kube-system(64595748-c645-4a81-a733-7d5b51478af9)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Jan 08 23:05:33 ingress-addon-legacy-713577 kubelet[1850]: I0108 23:05:33.698377    1850 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-4m2hl" (UniqueName: "kubernetes.io/secret/64595748-c645-4a81-a733-7d5b51478af9-minikube-ingress-dns-token-4m2hl") pod "64595748-c645-4a81-a733-7d5b51478af9" (UID: "64595748-c645-4a81-a733-7d5b51478af9")
	Jan 08 23:05:33 ingress-addon-legacy-713577 kubelet[1850]: I0108 23:05:33.700349    1850 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64595748-c645-4a81-a733-7d5b51478af9-minikube-ingress-dns-token-4m2hl" (OuterVolumeSpecName: "minikube-ingress-dns-token-4m2hl") pod "64595748-c645-4a81-a733-7d5b51478af9" (UID: "64595748-c645-4a81-a733-7d5b51478af9"). InnerVolumeSpecName "minikube-ingress-dns-token-4m2hl". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 08 23:05:33 ingress-addon-legacy-713577 kubelet[1850]: I0108 23:05:33.798724    1850 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-4m2hl" (UniqueName: "kubernetes.io/secret/64595748-c645-4a81-a733-7d5b51478af9-minikube-ingress-dns-token-4m2hl") on node "ingress-addon-legacy-713577" DevicePath ""
	Jan 08 23:05:35 ingress-addon-legacy-713577 kubelet[1850]: E0108 23:05:35.537373    1850 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-dw5d5.17a881c37a9d57b5", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-dw5d5", UID:"a779184a-d6e9-411a-8b8c-c368e4b1d7f3", APIVersion:"v1", ResourceVersion:"471", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-713577"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc15f3dcfdfeaa1b5, ext:209610150020, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc15f3dcfdfeaa1b5, ext:209610150020, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-dw5d5.17a881c37a9d57b5" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jan 08 23:05:35 ingress-addon-legacy-713577 kubelet[1850]: E0108 23:05:35.542023    1850 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-dw5d5.17a881c37a9d57b5", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-dw5d5", UID:"a779184a-d6e9-411a-8b8c-c368e4b1d7f3", APIVersion:"v1", ResourceVersion:"471", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-713577"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc15f3dcfdfeaa1b5, ext:209610150020, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc15f3dcfe013e4c3, ext:209612854164, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-dw5d5.17a881c37a9d57b5" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jan 08 23:05:37 ingress-addon-legacy-713577 kubelet[1850]: W0108 23:05:37.732932    1850 pod_container_deletor.go:77] Container "4493b069b2d61bb28ea57ef17f437b0481b9e62afc32fbf4de2da7c6d3ab415a" not found in pod's containers
	Jan 08 23:05:39 ingress-addon-legacy-713577 kubelet[1850]: I0108 23:05:39.715034    1850 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-nmjpm" (UniqueName: "kubernetes.io/secret/a779184a-d6e9-411a-8b8c-c368e4b1d7f3-ingress-nginx-token-nmjpm") pod "a779184a-d6e9-411a-8b8c-c368e4b1d7f3" (UID: "a779184a-d6e9-411a-8b8c-c368e4b1d7f3")
	Jan 08 23:05:39 ingress-addon-legacy-713577 kubelet[1850]: I0108 23:05:39.715093    1850 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/a779184a-d6e9-411a-8b8c-c368e4b1d7f3-webhook-cert") pod "a779184a-d6e9-411a-8b8c-c368e4b1d7f3" (UID: "a779184a-d6e9-411a-8b8c-c368e4b1d7f3")
	Jan 08 23:05:39 ingress-addon-legacy-713577 kubelet[1850]: I0108 23:05:39.717236    1850 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a779184a-d6e9-411a-8b8c-c368e4b1d7f3-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a779184a-d6e9-411a-8b8c-c368e4b1d7f3" (UID: "a779184a-d6e9-411a-8b8c-c368e4b1d7f3"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 08 23:05:39 ingress-addon-legacy-713577 kubelet[1850]: I0108 23:05:39.717364    1850 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a779184a-d6e9-411a-8b8c-c368e4b1d7f3-ingress-nginx-token-nmjpm" (OuterVolumeSpecName: "ingress-nginx-token-nmjpm") pod "a779184a-d6e9-411a-8b8c-c368e4b1d7f3" (UID: "a779184a-d6e9-411a-8b8c-c368e4b1d7f3"). InnerVolumeSpecName "ingress-nginx-token-nmjpm". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 08 23:05:39 ingress-addon-legacy-713577 kubelet[1850]: I0108 23:05:39.815451    1850 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/a779184a-d6e9-411a-8b8c-c368e4b1d7f3-webhook-cert") on node "ingress-addon-legacy-713577" DevicePath ""
	Jan 08 23:05:39 ingress-addon-legacy-713577 kubelet[1850]: I0108 23:05:39.815494    1850 reconciler.go:319] Volume detached for volume "ingress-nginx-token-nmjpm" (UniqueName: "kubernetes.io/secret/a779184a-d6e9-411a-8b8c-c368e4b1d7f3-ingress-nginx-token-nmjpm") on node "ingress-addon-legacy-713577" DevicePath ""
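
The repeated ImageInspectError in the kubelet log is a CRI-O short-name failure: the addon references cryptexlabs/minikube-ingress-dns without a registry host, and the node's /etc/containers/registries.conf defines no unqualified-search registries, so the runtime refuses to guess one. A minimal sketch of the two usual remedies, assuming docker.io is the intended registry (illustrative, not taken from this node):

	# /etc/containers/registries.conf (hypothetical addition)
	unqualified-search-registries = ["docker.io"]

or fully qualify the image so no search is needed, e.g. docker.io/cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab.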
	
	
	==> storage-provisioner [9dc983b83fd916a64e1a325661a826d3c2968c567716c6edcc6d9cb1aa60f4d6] <==
	I0108 23:02:27.526608       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0108 23:02:27.534069       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0108 23:02:27.534115       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0108 23:02:27.712812       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0108 23:02:27.712988       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-713577_a28daaa9-8b63-4004-99f8-39b8f98bd9c1!
	I0108 23:02:27.712947       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c9ea26e1-f70f-4b51-a758-f45d4de360b7", APIVersion:"v1", ResourceVersion:"408", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-713577_a28daaa9-8b63-4004-99f8-39b8f98bd9c1 became leader
	I0108 23:02:27.813700       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-713577_a28daaa9-8b63-4004-99f8-39b8f98bd9c1!
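
The storage-provisioner lines show client-go leader election over the kube-system/k8s.io-minikube-hostpath Endpoints object named in the event above; the current holder is recorded in an annotation on that object. One way to read it back, assuming the endpoints-based resource lock (hypothetical check, not part of this run):

	kubectl -n kube-system get endpoints k8s.io-minikube-hostpath \
	  -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'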
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-713577 -n ingress-addon-legacy-713577
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-713577 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (176.64s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (3.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-659947 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-659947 -- exec busybox-5bc68d56bd-d8rhc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-659947 -- exec busybox-5bc68d56bd-d8rhc -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-659947 -- exec busybox-5bc68d56bd-d8rhc -- sh -c "ping -c 1 192.168.58.1": exit status 1 (191.571675ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-d8rhc): exit status 1
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-659947 -- exec busybox-5bc68d56bd-wpl2n -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-659947 -- exec busybox-5bc68d56bd-wpl2n -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-659947 -- exec busybox-5bc68d56bd-wpl2n -- sh -c "ping -c 1 192.168.58.1": exit status 1 (182.496863ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-wpl2n): exit status 1
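
Both pods fail the same way: busybox's ping opens a raw ICMP socket, which an unprivileged container only gets with CAP_NET_RAW (or a kernel that permits unprivileged ICMP via net.ipv4.ping_group_range), and under this runtime the test pod evidently runs without it. A hedged sketch of a container spec granting the capability, with illustrative names rather than the test's actual testdata:

	# hypothetical pod fragment
	spec:
	  containers:
	  - name: busybox
	    image: busybox
	    command: ["sleep", "3600"]
	    securityContext:
	      capabilities:
	        add: ["NET_RAW"]   # permits raw ICMP sockets for ping

Alternatively, opening unprivileged ICMP node-wide would also work: sysctl -w net.ipv4.ping_group_range="0 2147483647".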
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-659947
helpers_test.go:235: (dbg) docker inspect multinode-659947:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5d2b1864fb291cb87831fe263a8dbae2ec490d0c27f2d6deb1b4fd6d0f2d60bc",
	        "Created": "2024-01-08T23:10:53.409847328Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 411802,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-08T23:10:53.710178703Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a68510052ff42a82cad4cbbd1f236376dac91176d14d2a924a5e2b18f7ff0a23",
	        "ResolvConfPath": "/var/lib/docker/containers/5d2b1864fb291cb87831fe263a8dbae2ec490d0c27f2d6deb1b4fd6d0f2d60bc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5d2b1864fb291cb87831fe263a8dbae2ec490d0c27f2d6deb1b4fd6d0f2d60bc/hostname",
	        "HostsPath": "/var/lib/docker/containers/5d2b1864fb291cb87831fe263a8dbae2ec490d0c27f2d6deb1b4fd6d0f2d60bc/hosts",
	        "LogPath": "/var/lib/docker/containers/5d2b1864fb291cb87831fe263a8dbae2ec490d0c27f2d6deb1b4fd6d0f2d60bc/5d2b1864fb291cb87831fe263a8dbae2ec490d0c27f2d6deb1b4fd6d0f2d60bc-json.log",
	        "Name": "/multinode-659947",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-659947:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "multinode-659947",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7a4c3ebc595d2bcad9bdccdd2f0cc56b81c1b3b7b16404dc62ab15e785ef6293-init/diff:/var/lib/docker/overlay2/5d41a77db4225bbdb2799c0759ad4432ee2e97ed824f853dc9d7fa3db67a2cbc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7a4c3ebc595d2bcad9bdccdd2f0cc56b81c1b3b7b16404dc62ab15e785ef6293/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7a4c3ebc595d2bcad9bdccdd2f0cc56b81c1b3b7b16404dc62ab15e785ef6293/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7a4c3ebc595d2bcad9bdccdd2f0cc56b81c1b3b7b16404dc62ab15e785ef6293/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-659947",
	                "Source": "/var/lib/docker/volumes/multinode-659947/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-659947",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-659947",
	                "name.minikube.sigs.k8s.io": "multinode-659947",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "378d0d14bad08e77120e5553bbf3df8186b0a1a30de2eda344e7486f509d9bdb",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33145"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33147"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33146"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/378d0d14bad0",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-659947": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5d2b1864fb29",
	                        "multinode-659947"
	                    ],
	                    "NetworkID": "c5151302c83dae69fe524913e7223ee26b694615602a2ddbc1037162fa48c6c7",
	                    "EndpointID": "d4520589605682c5f29cc90e41bbae27affea44f6c4fbbda36f00370d9af146c",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
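
The inspect output shows every published port bound to 127.0.0.1 with a dynamically assigned host port; the API server (8443/tcp) landed on 33146. Reading a mapping back directly (hypothetical check, not executed by the test):

	docker port multinode-659947 8443/tcp
	# for this run: 127.0.0.1:33146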
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-659947 -n multinode-659947
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-659947 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-659947 logs -n 25: (1.334411529s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-884208                           | mount-start-2-884208 | jenkins | v1.32.0 | 08 Jan 24 23:10 UTC | 08 Jan 24 23:10 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-884208 ssh -- ls                    | mount-start-2-884208 | jenkins | v1.32.0 | 08 Jan 24 23:10 UTC | 08 Jan 24 23:10 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-865282                           | mount-start-1-865282 | jenkins | v1.32.0 | 08 Jan 24 23:10 UTC | 08 Jan 24 23:10 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-884208 ssh -- ls                    | mount-start-2-884208 | jenkins | v1.32.0 | 08 Jan 24 23:10 UTC | 08 Jan 24 23:10 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-884208                           | mount-start-2-884208 | jenkins | v1.32.0 | 08 Jan 24 23:10 UTC | 08 Jan 24 23:10 UTC |
	| start   | -p mount-start-2-884208                           | mount-start-2-884208 | jenkins | v1.32.0 | 08 Jan 24 23:10 UTC | 08 Jan 24 23:10 UTC |
	| ssh     | mount-start-2-884208 ssh -- ls                    | mount-start-2-884208 | jenkins | v1.32.0 | 08 Jan 24 23:10 UTC | 08 Jan 24 23:10 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-884208                           | mount-start-2-884208 | jenkins | v1.32.0 | 08 Jan 24 23:10 UTC | 08 Jan 24 23:10 UTC |
	| delete  | -p mount-start-1-865282                           | mount-start-1-865282 | jenkins | v1.32.0 | 08 Jan 24 23:10 UTC | 08 Jan 24 23:10 UTC |
	| start   | -p multinode-659947                               | multinode-659947     | jenkins | v1.32.0 | 08 Jan 24 23:10 UTC | 08 Jan 24 23:12 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-659947 -- apply -f                   | multinode-659947     | jenkins | v1.32.0 | 08 Jan 24 23:12 UTC | 08 Jan 24 23:12 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-659947 -- rollout                    | multinode-659947     | jenkins | v1.32.0 | 08 Jan 24 23:12 UTC | 08 Jan 24 23:12 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-659947 -- get pods -o                | multinode-659947     | jenkins | v1.32.0 | 08 Jan 24 23:12 UTC | 08 Jan 24 23:12 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-659947 -- get pods -o                | multinode-659947     | jenkins | v1.32.0 | 08 Jan 24 23:12 UTC | 08 Jan 24 23:12 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-659947 -- exec                       | multinode-659947     | jenkins | v1.32.0 | 08 Jan 24 23:12 UTC | 08 Jan 24 23:12 UTC |
	|         | busybox-5bc68d56bd-d8rhc --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-659947 -- exec                       | multinode-659947     | jenkins | v1.32.0 | 08 Jan 24 23:12 UTC | 08 Jan 24 23:12 UTC |
	|         | busybox-5bc68d56bd-wpl2n --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-659947 -- exec                       | multinode-659947     | jenkins | v1.32.0 | 08 Jan 24 23:12 UTC | 08 Jan 24 23:12 UTC |
	|         | busybox-5bc68d56bd-d8rhc --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-659947 -- exec                       | multinode-659947     | jenkins | v1.32.0 | 08 Jan 24 23:12 UTC | 08 Jan 24 23:12 UTC |
	|         | busybox-5bc68d56bd-wpl2n --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-659947 -- exec                       | multinode-659947     | jenkins | v1.32.0 | 08 Jan 24 23:12 UTC | 08 Jan 24 23:12 UTC |
	|         | busybox-5bc68d56bd-d8rhc -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-659947 -- exec                       | multinode-659947     | jenkins | v1.32.0 | 08 Jan 24 23:12 UTC | 08 Jan 24 23:12 UTC |
	|         | busybox-5bc68d56bd-wpl2n -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-659947 -- get pods -o                | multinode-659947     | jenkins | v1.32.0 | 08 Jan 24 23:12 UTC | 08 Jan 24 23:12 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-659947 -- exec                       | multinode-659947     | jenkins | v1.32.0 | 08 Jan 24 23:12 UTC | 08 Jan 24 23:12 UTC |
	|         | busybox-5bc68d56bd-d8rhc                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-659947 -- exec                       | multinode-659947     | jenkins | v1.32.0 | 08 Jan 24 23:12 UTC |                     |
	|         | busybox-5bc68d56bd-d8rhc -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-659947 -- exec                       | multinode-659947     | jenkins | v1.32.0 | 08 Jan 24 23:12 UTC | 08 Jan 24 23:12 UTC |
	|         | busybox-5bc68d56bd-wpl2n                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-659947 -- exec                       | multinode-659947     | jenkins | v1.32.0 | 08 Jan 24 23:12 UTC |                     |
	|         | busybox-5bc68d56bd-wpl2n -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 23:10:47
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 23:10:47.228182  411209 out.go:296] Setting OutFile to fd 1 ...
	I0108 23:10:47.228351  411209 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 23:10:47.228362  411209 out.go:309] Setting ErrFile to fd 2...
	I0108 23:10:47.228368  411209 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 23:10:47.228609  411209 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-321683/.minikube/bin
	I0108 23:10:47.229224  411209 out.go:303] Setting JSON to false
	I0108 23:10:47.230735  411209 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":13979,"bootTime":1704741468,"procs":884,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 23:10:47.230802  411209 start.go:138] virtualization: kvm guest
	I0108 23:10:47.233348  411209 out.go:177] * [multinode-659947] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 23:10:47.234939  411209 out.go:177]   - MINIKUBE_LOCATION=17830
	I0108 23:10:47.235038  411209 notify.go:220] Checking for updates...
	I0108 23:10:47.236334  411209 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 23:10:47.238070  411209 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17830-321683/kubeconfig
	I0108 23:10:47.239650  411209 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-321683/.minikube
	I0108 23:10:47.241111  411209 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 23:10:47.242507  411209 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 23:10:47.244100  411209 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 23:10:47.269891  411209 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 23:10:47.270020  411209 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 23:10:47.320991  411209 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:35 SystemTime:2024-01-08 23:10:47.312007393 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 23:10:47.321104  411209 docker.go:295] overlay module found
	I0108 23:10:47.323081  411209 out.go:177] * Using the docker driver based on user configuration
	I0108 23:10:47.324405  411209 start.go:298] selected driver: docker
	I0108 23:10:47.324418  411209 start.go:902] validating driver "docker" against <nil>
	I0108 23:10:47.324429  411209 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 23:10:47.325194  411209 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 23:10:47.383250  411209 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:35 SystemTime:2024-01-08 23:10:47.374430498 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 23:10:47.383473  411209 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0108 23:10:47.383704  411209 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 23:10:47.385666  411209 out.go:177] * Using Docker driver with root privileges
	I0108 23:10:47.387041  411209 cni.go:84] Creating CNI manager for ""
	I0108 23:10:47.387058  411209 cni.go:136] 0 nodes found, recommending kindnet
	I0108 23:10:47.387069  411209 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I0108 23:10:47.387104  411209 start_flags.go:323] config:
	{Name:multinode-659947 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-659947 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 23:10:47.388835  411209 out.go:177] * Starting control plane node multinode-659947 in cluster multinode-659947
	I0108 23:10:47.390305  411209 cache.go:121] Beginning downloading kic base image for docker with crio
	I0108 23:10:47.391668  411209 out.go:177] * Pulling base image v0.0.42-1704751654-17830 ...
	I0108 23:10:47.392956  411209 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 23:10:47.392998  411209 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17830-321683/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0108 23:10:47.393008  411209 cache.go:56] Caching tarball of preloaded images
	I0108 23:10:47.393043  411209 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 in local docker daemon
	I0108 23:10:47.393113  411209 preload.go:174] Found /home/jenkins/minikube-integration/17830-321683/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0108 23:10:47.393125  411209 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0108 23:10:47.393497  411209 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/multinode-659947/config.json ...
	I0108 23:10:47.393522  411209 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/multinode-659947/config.json: {Name:mk27fe33558e60e485af772d7d9ec8cb3a20a390 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 23:10:47.409987  411209 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 in local docker daemon, skipping pull
	I0108 23:10:47.410032  411209 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 exists in daemon, skipping load
	I0108 23:10:47.410054  411209 cache.go:194] Successfully downloaded all kic artifacts
	I0108 23:10:47.410102  411209 start.go:365] acquiring machines lock for multinode-659947: {Name:mk6fd139711b1946ca3591f55a39762658b2f0ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:10:47.410241  411209 start.go:369] acquired machines lock for "multinode-659947" in 110.533µs
	I0108 23:10:47.410271  411209 start.go:93] Provisioning new machine with config: &{Name:multinode-659947 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-659947 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 23:10:47.410407  411209 start.go:125] createHost starting for "" (driver="docker")
	I0108 23:10:47.412612  411209 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0108 23:10:47.412942  411209 start.go:159] libmachine.API.Create for "multinode-659947" (driver="docker")
	I0108 23:10:47.412993  411209 client.go:168] LocalClient.Create starting
	I0108 23:10:47.413117  411209 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17830-321683/.minikube/certs/ca.pem
	I0108 23:10:47.413177  411209 main.go:141] libmachine: Decoding PEM data...
	I0108 23:10:47.413205  411209 main.go:141] libmachine: Parsing certificate...
	I0108 23:10:47.413284  411209 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17830-321683/.minikube/certs/cert.pem
	I0108 23:10:47.413313  411209 main.go:141] libmachine: Decoding PEM data...
	I0108 23:10:47.413327  411209 main.go:141] libmachine: Parsing certificate...
	I0108 23:10:47.413783  411209 cli_runner.go:164] Run: docker network inspect multinode-659947 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0108 23:10:47.430346  411209 cli_runner.go:211] docker network inspect multinode-659947 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0108 23:10:47.430419  411209 network_create.go:281] running [docker network inspect multinode-659947] to gather additional debugging logs...
	I0108 23:10:47.430442  411209 cli_runner.go:164] Run: docker network inspect multinode-659947
	W0108 23:10:47.447985  411209 cli_runner.go:211] docker network inspect multinode-659947 returned with exit code 1
	I0108 23:10:47.448029  411209 network_create.go:284] error running [docker network inspect multinode-659947]: docker network inspect multinode-659947: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-659947 not found
	I0108 23:10:47.448042  411209 network_create.go:286] output of [docker network inspect multinode-659947]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-659947 not found
	
	** /stderr **
	I0108 23:10:47.448141  411209 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 23:10:47.465068  411209 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-880b87d22d94 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:f7:7c:b5:26} reservation:<nil>}
	I0108 23:10:47.465561  411209 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00289a5b0}
	I0108 23:10:47.465592  411209 network_create.go:124] attempt to create docker network multinode-659947 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0108 23:10:47.465652  411209 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-659947 multinode-659947
	I0108 23:10:47.519667  411209 network_create.go:108] docker network multinode-659947 192.168.58.0/24 created
	I0108 23:10:47.519707  411209 kic.go:121] calculated static IP "192.168.58.2" for the "multinode-659947" container
	I0108 23:10:47.519772  411209 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0108 23:10:47.536307  411209 cli_runner.go:164] Run: docker volume create multinode-659947 --label name.minikube.sigs.k8s.io=multinode-659947 --label created_by.minikube.sigs.k8s.io=true
	I0108 23:10:47.554082  411209 oci.go:103] Successfully created a docker volume multinode-659947
	I0108 23:10:47.554206  411209 cli_runner.go:164] Run: docker run --rm --name multinode-659947-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-659947 --entrypoint /usr/bin/test -v multinode-659947:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 -d /var/lib
	I0108 23:10:48.090999  411209 oci.go:107] Successfully prepared a docker volume multinode-659947
	I0108 23:10:48.091053  411209 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 23:10:48.091077  411209 kic.go:194] Starting extracting preloaded images to volume ...
	I0108 23:10:48.091153  411209 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17830-321683/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-659947:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 -I lz4 -xf /preloaded.tar -C /extractDir
	I0108 23:10:53.344750  411209 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17830-321683/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-659947:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 -I lz4 -xf /preloaded.tar -C /extractDir: (5.253543951s)
	I0108 23:10:53.344789  411209 kic.go:203] duration metric: took 5.253707 seconds to extract preloaded images to volume
	W0108 23:10:53.344947  411209 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0108 23:10:53.345067  411209 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0108 23:10:53.395149  411209 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-659947 --name multinode-659947 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-659947 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-659947 --network multinode-659947 --ip 192.168.58.2 --volume multinode-659947:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617
	I0108 23:10:53.718765  411209 cli_runner.go:164] Run: docker container inspect multinode-659947 --format={{.State.Running}}
	I0108 23:10:53.737440  411209 cli_runner.go:164] Run: docker container inspect multinode-659947 --format={{.State.Status}}
	I0108 23:10:53.754422  411209 cli_runner.go:164] Run: docker exec multinode-659947 stat /var/lib/dpkg/alternatives/iptables
	I0108 23:10:53.791360  411209 oci.go:144] the created container "multinode-659947" has a running status.
	I0108 23:10:53.791404  411209 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17830-321683/.minikube/machines/multinode-659947/id_rsa...
	I0108 23:10:53.844924  411209 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-321683/.minikube/machines/multinode-659947/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0108 23:10:53.844980  411209 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17830-321683/.minikube/machines/multinode-659947/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0108 23:10:53.866087  411209 cli_runner.go:164] Run: docker container inspect multinode-659947 --format={{.State.Status}}
	I0108 23:10:53.884416  411209 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0108 23:10:53.884441  411209 kic_runner.go:114] Args: [docker exec --privileged multinode-659947 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0108 23:10:53.949736  411209 cli_runner.go:164] Run: docker container inspect multinode-659947 --format={{.State.Status}}
	I0108 23:10:53.966715  411209 machine.go:88] provisioning docker machine ...
	I0108 23:10:53.966752  411209 ubuntu.go:169] provisioning hostname "multinode-659947"
	I0108 23:10:53.966812  411209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-659947
	I0108 23:10:53.983965  411209 main.go:141] libmachine: Using SSH client type: native
	I0108 23:10:53.984343  411209 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 33149 <nil> <nil>}
	I0108 23:10:53.984359  411209 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-659947 && echo "multinode-659947" | sudo tee /etc/hostname
	I0108 23:10:53.984963  411209 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54420->127.0.0.1:33149: read: connection reset by peer
	I0108 23:10:57.129729  411209 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-659947
	
	I0108 23:10:57.129810  411209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-659947
	I0108 23:10:57.146439  411209 main.go:141] libmachine: Using SSH client type: native
	I0108 23:10:57.146791  411209 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 33149 <nil> <nil>}
	I0108 23:10:57.146818  411209 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-659947' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-659947/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-659947' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 23:10:57.279488  411209 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 23:10:57.279523  411209 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17830-321683/.minikube CaCertPath:/home/jenkins/minikube-integration/17830-321683/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17830-321683/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17830-321683/.minikube}
	I0108 23:10:57.279564  411209 ubuntu.go:177] setting up certificates
	I0108 23:10:57.279576  411209 provision.go:83] configureAuth start
	I0108 23:10:57.279639  411209 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-659947
	I0108 23:10:57.298795  411209 provision.go:138] copyHostCerts
	I0108 23:10:57.298834  411209 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-321683/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17830-321683/.minikube/key.pem
	I0108 23:10:57.298863  411209 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-321683/.minikube/key.pem, removing ...
	I0108 23:10:57.298871  411209 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-321683/.minikube/key.pem
	I0108 23:10:57.298934  411209 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-321683/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17830-321683/.minikube/key.pem (1679 bytes)
	I0108 23:10:57.299020  411209 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-321683/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17830-321683/.minikube/ca.pem
	I0108 23:10:57.299038  411209 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-321683/.minikube/ca.pem, removing ...
	I0108 23:10:57.299044  411209 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-321683/.minikube/ca.pem
	I0108 23:10:57.299073  411209 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-321683/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17830-321683/.minikube/ca.pem (1082 bytes)
	I0108 23:10:57.299127  411209 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-321683/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17830-321683/.minikube/cert.pem
	I0108 23:10:57.299143  411209 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-321683/.minikube/cert.pem, removing ...
	I0108 23:10:57.299148  411209 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-321683/.minikube/cert.pem
	I0108 23:10:57.299168  411209 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-321683/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17830-321683/.minikube/cert.pem (1123 bytes)
	I0108 23:10:57.299232  411209 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17830-321683/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17830-321683/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17830-321683/.minikube/certs/ca-key.pem org=jenkins.multinode-659947 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-659947]
	I0108 23:10:57.708964  411209 provision.go:172] copyRemoteCerts
	I0108 23:10:57.709037  411209 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 23:10:57.709074  411209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-659947
	I0108 23:10:57.726639  411209 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/multinode-659947/id_rsa Username:docker}
	I0108 23:10:57.824434  411209 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-321683/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0108 23:10:57.824498  411209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-321683/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0108 23:10:57.848158  411209 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-321683/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0108 23:10:57.848241  411209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-321683/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0108 23:10:57.872322  411209 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-321683/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0108 23:10:57.872389  411209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-321683/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 23:10:57.895377  411209 provision.go:86] duration metric: configureAuth took 615.783242ms
	I0108 23:10:57.895406  411209 ubuntu.go:193] setting minikube options for container-runtime
	I0108 23:10:57.895626  411209 config.go:182] Loaded profile config "multinode-659947": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 23:10:57.895751  411209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-659947
	I0108 23:10:57.913275  411209 main.go:141] libmachine: Using SSH client type: native
	I0108 23:10:57.913696  411209 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 33149 <nil> <nil>}
	I0108 23:10:57.913717  411209 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 23:10:58.133584  411209 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 23:10:58.133622  411209 machine.go:91] provisioned docker machine in 4.166877787s
	I0108 23:10:58.133633  411209 client.go:171] LocalClient.Create took 10.720629069s
	I0108 23:10:58.133652  411209 start.go:167] duration metric: libmachine.API.Create for "multinode-659947" took 10.720713373s
	I0108 23:10:58.133658  411209 start.go:300] post-start starting for "multinode-659947" (driver="docker")
	I0108 23:10:58.133669  411209 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 23:10:58.133731  411209 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 23:10:58.133769  411209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-659947
	I0108 23:10:58.150331  411209 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/multinode-659947/id_rsa Username:docker}
	I0108 23:10:58.252103  411209 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 23:10:58.255363  411209 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I0108 23:10:58.255388  411209 command_runner.go:130] > NAME="Ubuntu"
	I0108 23:10:58.255399  411209 command_runner.go:130] > VERSION_ID="22.04"
	I0108 23:10:58.255407  411209 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I0108 23:10:58.255417  411209 command_runner.go:130] > VERSION_CODENAME=jammy
	I0108 23:10:58.255424  411209 command_runner.go:130] > ID=ubuntu
	I0108 23:10:58.255430  411209 command_runner.go:130] > ID_LIKE=debian
	I0108 23:10:58.255439  411209 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0108 23:10:58.255451  411209 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0108 23:10:58.255464  411209 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0108 23:10:58.255555  411209 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0108 23:10:58.255581  411209 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0108 23:10:58.255675  411209 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 23:10:58.255698  411209 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 23:10:58.255707  411209 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 23:10:58.255717  411209 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0108 23:10:58.255733  411209 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-321683/.minikube/addons for local assets ...
	I0108 23:10:58.255794  411209 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-321683/.minikube/files for local assets ...
	I0108 23:10:58.255876  411209 filesync.go:149] local asset: /home/jenkins/minikube-integration/17830-321683/.minikube/files/etc/ssl/certs/3283842.pem -> 3283842.pem in /etc/ssl/certs
	I0108 23:10:58.255894  411209 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-321683/.minikube/files/etc/ssl/certs/3283842.pem -> /etc/ssl/certs/3283842.pem
	I0108 23:10:58.255993  411209 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 23:10:58.264132  411209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-321683/.minikube/files/etc/ssl/certs/3283842.pem --> /etc/ssl/certs/3283842.pem (1708 bytes)
	I0108 23:10:58.286407  411209 start.go:303] post-start completed in 152.733574ms
	I0108 23:10:58.286760  411209 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-659947
	I0108 23:10:58.302959  411209 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/multinode-659947/config.json ...
	I0108 23:10:58.303214  411209 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 23:10:58.303253  411209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-659947
	I0108 23:10:58.319444  411209 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/multinode-659947/id_rsa Username:docker}
	I0108 23:10:58.411833  411209 command_runner.go:130] > 24%
	I0108 23:10:58.412139  411209 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 23:10:58.416076  411209 command_runner.go:130] > 223G
	I0108 23:10:58.416299  411209 start.go:128] duration metric: createHost completed in 11.005878549s
	I0108 23:10:58.416317  411209 start.go:83] releasing machines lock for "multinode-659947", held for 11.006063814s
	I0108 23:10:58.416384  411209 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-659947
	I0108 23:10:58.432453  411209 ssh_runner.go:195] Run: cat /version.json
	I0108 23:10:58.432523  411209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-659947
	I0108 23:10:58.432549  411209 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 23:10:58.432604  411209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-659947
	I0108 23:10:58.455530  411209 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/multinode-659947/id_rsa Username:docker}
	I0108 23:10:58.455990  411209 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/multinode-659947/id_rsa Username:docker}
	I0108 23:10:58.638066  411209 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0108 23:10:58.640405  411209 command_runner.go:130] > {"iso_version": "v1.32.1-1702708929-17806", "kicbase_version": "v0.0.42-1704751654-17830", "minikube_version": "v1.32.0", "commit": "8e62236f86fac88150e437f293b77692cc68cda5"}
	I0108 23:10:58.640549  411209 ssh_runner.go:195] Run: systemctl --version
	I0108 23:10:58.644647  411209 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.11)
	I0108 23:10:58.644700  411209 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0108 23:10:58.644788  411209 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 23:10:58.783348  411209 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 23:10:58.787732  411209 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0108 23:10:58.787755  411209 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0108 23:10:58.787761  411209 command_runner.go:130] > Device: 37h/55d	Inode: 1044333     Links: 1
	I0108 23:10:58.787767  411209 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 23:10:58.787773  411209 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0108 23:10:58.787778  411209 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0108 23:10:58.787787  411209 command_runner.go:130] > Change: 2024-01-08 22:52:04.683201688 +0000
	I0108 23:10:58.787795  411209 command_runner.go:130] >  Birth: 2024-01-08 22:52:04.683201688 +0000
	I0108 23:10:58.787845  411209 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 23:10:58.807124  411209 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0108 23:10:58.807211  411209 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 23:10:58.836778  411209 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0108 23:10:58.836854  411209 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0108 23:10:58.836865  411209 start.go:475] detecting cgroup driver to use...
	I0108 23:10:58.836901  411209 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0108 23:10:58.836942  411209 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 23:10:58.851704  411209 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 23:10:58.862899  411209 docker.go:203] disabling cri-docker service (if available) ...
	I0108 23:10:58.862972  411209 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 23:10:58.875966  411209 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 23:10:58.889812  411209 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 23:10:58.970778  411209 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 23:10:58.985000  411209 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0108 23:10:59.053228  411209 docker.go:219] disabling docker service ...
	I0108 23:10:59.053309  411209 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 23:10:59.071888  411209 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 23:10:59.083012  411209 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 23:10:59.094340  411209 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0108 23:10:59.166678  411209 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 23:10:59.253975  411209 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0108 23:10:59.254070  411209 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 23:10:59.264974  411209 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 23:10:59.279638  411209 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0108 23:10:59.280549  411209 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0108 23:10:59.280612  411209 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 23:10:59.289837  411209 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 23:10:59.289918  411209 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 23:10:59.300271  411209 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 23:10:59.309731  411209 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 23:10:59.318917  411209 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 23:10:59.328016  411209 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 23:10:59.335373  411209 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0108 23:10:59.336122  411209 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 23:10:59.344305  411209 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 23:10:59.415813  411209 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0108 23:10:59.515315  411209 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 23:10:59.515401  411209 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 23:10:59.519012  411209 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0108 23:10:59.519042  411209 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0108 23:10:59.519057  411209 command_runner.go:130] > Device: 40h/64d	Inode: 186         Links: 1
	I0108 23:10:59.519067  411209 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 23:10:59.519075  411209 command_runner.go:130] > Access: 2024-01-08 23:10:59.501502670 +0000
	I0108 23:10:59.519092  411209 command_runner.go:130] > Modify: 2024-01-08 23:10:59.501502670 +0000
	I0108 23:10:59.519105  411209 command_runner.go:130] > Change: 2024-01-08 23:10:59.501502670 +0000
	I0108 23:10:59.519112  411209 command_runner.go:130] >  Birth: -
	I0108 23:10:59.519138  411209 start.go:543] Will wait 60s for crictl version
	I0108 23:10:59.519183  411209 ssh_runner.go:195] Run: which crictl
	I0108 23:10:59.522460  411209 command_runner.go:130] > /usr/bin/crictl
	I0108 23:10:59.522546  411209 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 23:10:59.553573  411209 command_runner.go:130] > Version:  0.1.0
	I0108 23:10:59.553595  411209 command_runner.go:130] > RuntimeName:  cri-o
	I0108 23:10:59.553600  411209 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0108 23:10:59.553606  411209 command_runner.go:130] > RuntimeApiVersion:  v1
	I0108 23:10:59.556057  411209 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0108 23:10:59.556159  411209 ssh_runner.go:195] Run: crio --version
	I0108 23:10:59.589980  411209 command_runner.go:130] > crio version 1.24.6
	I0108 23:10:59.590004  411209 command_runner.go:130] > Version:          1.24.6
	I0108 23:10:59.590015  411209 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0108 23:10:59.590023  411209 command_runner.go:130] > GitTreeState:     clean
	I0108 23:10:59.590033  411209 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0108 23:10:59.590041  411209 command_runner.go:130] > GoVersion:        go1.18.2
	I0108 23:10:59.590048  411209 command_runner.go:130] > Compiler:         gc
	I0108 23:10:59.590055  411209 command_runner.go:130] > Platform:         linux/amd64
	I0108 23:10:59.590062  411209 command_runner.go:130] > Linkmode:         dynamic
	I0108 23:10:59.590073  411209 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0108 23:10:59.590093  411209 command_runner.go:130] > SeccompEnabled:   true
	I0108 23:10:59.590103  411209 command_runner.go:130] > AppArmorEnabled:  false
	I0108 23:10:59.591732  411209 ssh_runner.go:195] Run: crio --version
	I0108 23:10:59.627796  411209 command_runner.go:130] > crio version 1.24.6
	I0108 23:10:59.627819  411209 command_runner.go:130] > Version:          1.24.6
	I0108 23:10:59.627826  411209 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0108 23:10:59.627830  411209 command_runner.go:130] > GitTreeState:     clean
	I0108 23:10:59.627836  411209 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0108 23:10:59.627840  411209 command_runner.go:130] > GoVersion:        go1.18.2
	I0108 23:10:59.627844  411209 command_runner.go:130] > Compiler:         gc
	I0108 23:10:59.627849  411209 command_runner.go:130] > Platform:         linux/amd64
	I0108 23:10:59.627854  411209 command_runner.go:130] > Linkmode:         dynamic
	I0108 23:10:59.627861  411209 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0108 23:10:59.627865  411209 command_runner.go:130] > SeccompEnabled:   true
	I0108 23:10:59.627869  411209 command_runner.go:130] > AppArmorEnabled:  false
	I0108 23:10:59.630147  411209 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I0108 23:10:59.631690  411209 cli_runner.go:164] Run: docker network inspect multinode-659947 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 23:10:59.648610  411209 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0108 23:10:59.652385  411209 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 23:10:59.662969  411209 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 23:10:59.663033  411209 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 23:10:59.715213  411209 command_runner.go:130] > {
	I0108 23:10:59.715236  411209 command_runner.go:130] >   "images": [
	I0108 23:10:59.715240  411209 command_runner.go:130] >     {
	I0108 23:10:59.715248  411209 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0108 23:10:59.715253  411209 command_runner.go:130] >       "repoTags": [
	I0108 23:10:59.715284  411209 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0108 23:10:59.715290  411209 command_runner.go:130] >       ],
	I0108 23:10:59.715297  411209 command_runner.go:130] >       "repoDigests": [
	I0108 23:10:59.715320  411209 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0108 23:10:59.715335  411209 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0108 23:10:59.715342  411209 command_runner.go:130] >       ],
	I0108 23:10:59.715349  411209 command_runner.go:130] >       "size": "65258016",
	I0108 23:10:59.715359  411209 command_runner.go:130] >       "uid": null,
	I0108 23:10:59.715366  411209 command_runner.go:130] >       "username": "",
	I0108 23:10:59.715382  411209 command_runner.go:130] >       "spec": null,
	I0108 23:10:59.715389  411209 command_runner.go:130] >       "pinned": false
	I0108 23:10:59.715393  411209 command_runner.go:130] >     },
	I0108 23:10:59.715396  411209 command_runner.go:130] >     {
	I0108 23:10:59.715402  411209 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0108 23:10:59.715409  411209 command_runner.go:130] >       "repoTags": [
	I0108 23:10:59.715414  411209 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0108 23:10:59.715422  411209 command_runner.go:130] >       ],
	I0108 23:10:59.715427  411209 command_runner.go:130] >       "repoDigests": [
	I0108 23:10:59.715434  411209 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0108 23:10:59.715444  411209 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0108 23:10:59.715448  411209 command_runner.go:130] >       ],
	I0108 23:10:59.715458  411209 command_runner.go:130] >       "size": "31470524",
	I0108 23:10:59.715463  411209 command_runner.go:130] >       "uid": null,
	I0108 23:10:59.715467  411209 command_runner.go:130] >       "username": "",
	I0108 23:10:59.715471  411209 command_runner.go:130] >       "spec": null,
	I0108 23:10:59.715475  411209 command_runner.go:130] >       "pinned": false
	I0108 23:10:59.715479  411209 command_runner.go:130] >     },
	I0108 23:10:59.715486  411209 command_runner.go:130] >     {
	I0108 23:10:59.715495  411209 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0108 23:10:59.715499  411209 command_runner.go:130] >       "repoTags": [
	I0108 23:10:59.715504  411209 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0108 23:10:59.715508  411209 command_runner.go:130] >       ],
	I0108 23:10:59.715512  411209 command_runner.go:130] >       "repoDigests": [
	I0108 23:10:59.715520  411209 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0108 23:10:59.715530  411209 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0108 23:10:59.715534  411209 command_runner.go:130] >       ],
	I0108 23:10:59.715539  411209 command_runner.go:130] >       "size": "53621675",
	I0108 23:10:59.715545  411209 command_runner.go:130] >       "uid": null,
	I0108 23:10:59.715553  411209 command_runner.go:130] >       "username": "",
	I0108 23:10:59.715559  411209 command_runner.go:130] >       "spec": null,
	I0108 23:10:59.715563  411209 command_runner.go:130] >       "pinned": false
	I0108 23:10:59.715567  411209 command_runner.go:130] >     },
	I0108 23:10:59.715571  411209 command_runner.go:130] >     {
	I0108 23:10:59.715577  411209 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0108 23:10:59.715583  411209 command_runner.go:130] >       "repoTags": [
	I0108 23:10:59.715590  411209 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0108 23:10:59.715596  411209 command_runner.go:130] >       ],
	I0108 23:10:59.715601  411209 command_runner.go:130] >       "repoDigests": [
	I0108 23:10:59.715607  411209 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0108 23:10:59.715615  411209 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0108 23:10:59.715630  411209 command_runner.go:130] >       ],
	I0108 23:10:59.715636  411209 command_runner.go:130] >       "size": "295456551",
	I0108 23:10:59.715640  411209 command_runner.go:130] >       "uid": {
	I0108 23:10:59.715645  411209 command_runner.go:130] >         "value": "0"
	I0108 23:10:59.715651  411209 command_runner.go:130] >       },
	I0108 23:10:59.715655  411209 command_runner.go:130] >       "username": "",
	I0108 23:10:59.715661  411209 command_runner.go:130] >       "spec": null,
	I0108 23:10:59.715665  411209 command_runner.go:130] >       "pinned": false
	I0108 23:10:59.715671  411209 command_runner.go:130] >     },
	I0108 23:10:59.715674  411209 command_runner.go:130] >     {
	I0108 23:10:59.715682  411209 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0108 23:10:59.715686  411209 command_runner.go:130] >       "repoTags": [
	I0108 23:10:59.715694  411209 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0108 23:10:59.715701  411209 command_runner.go:130] >       ],
	I0108 23:10:59.715707  411209 command_runner.go:130] >       "repoDigests": [
	I0108 23:10:59.715715  411209 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0108 23:10:59.715724  411209 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0108 23:10:59.715729  411209 command_runner.go:130] >       ],
	I0108 23:10:59.715735  411209 command_runner.go:130] >       "size": "127226832",
	I0108 23:10:59.715739  411209 command_runner.go:130] >       "uid": {
	I0108 23:10:59.715746  411209 command_runner.go:130] >         "value": "0"
	I0108 23:10:59.715750  411209 command_runner.go:130] >       },
	I0108 23:10:59.715756  411209 command_runner.go:130] >       "username": "",
	I0108 23:10:59.715760  411209 command_runner.go:130] >       "spec": null,
	I0108 23:10:59.715766  411209 command_runner.go:130] >       "pinned": false
	I0108 23:10:59.715770  411209 command_runner.go:130] >     },
	I0108 23:10:59.715775  411209 command_runner.go:130] >     {
	I0108 23:10:59.715781  411209 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0108 23:10:59.715786  411209 command_runner.go:130] >       "repoTags": [
	I0108 23:10:59.715791  411209 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0108 23:10:59.715797  411209 command_runner.go:130] >       ],
	I0108 23:10:59.715803  411209 command_runner.go:130] >       "repoDigests": [
	I0108 23:10:59.715813  411209 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0108 23:10:59.715821  411209 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0108 23:10:59.715827  411209 command_runner.go:130] >       ],
	I0108 23:10:59.715831  411209 command_runner.go:130] >       "size": "123261750",
	I0108 23:10:59.715835  411209 command_runner.go:130] >       "uid": {
	I0108 23:10:59.715842  411209 command_runner.go:130] >         "value": "0"
	I0108 23:10:59.715845  411209 command_runner.go:130] >       },
	I0108 23:10:59.715849  411209 command_runner.go:130] >       "username": "",
	I0108 23:10:59.715854  411209 command_runner.go:130] >       "spec": null,
	I0108 23:10:59.715859  411209 command_runner.go:130] >       "pinned": false
	I0108 23:10:59.715863  411209 command_runner.go:130] >     },
	I0108 23:10:59.715866  411209 command_runner.go:130] >     {
	I0108 23:10:59.715873  411209 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0108 23:10:59.715879  411209 command_runner.go:130] >       "repoTags": [
	I0108 23:10:59.715884  411209 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0108 23:10:59.715887  411209 command_runner.go:130] >       ],
	I0108 23:10:59.715892  411209 command_runner.go:130] >       "repoDigests": [
	I0108 23:10:59.715903  411209 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0108 23:10:59.715911  411209 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0108 23:10:59.715915  411209 command_runner.go:130] >       ],
	I0108 23:10:59.715919  411209 command_runner.go:130] >       "size": "74749335",
	I0108 23:10:59.715923  411209 command_runner.go:130] >       "uid": null,
	I0108 23:10:59.715927  411209 command_runner.go:130] >       "username": "",
	I0108 23:10:59.715932  411209 command_runner.go:130] >       "spec": null,
	I0108 23:10:59.715936  411209 command_runner.go:130] >       "pinned": false
	I0108 23:10:59.715939  411209 command_runner.go:130] >     },
	I0108 23:10:59.715943  411209 command_runner.go:130] >     {
	I0108 23:10:59.715949  411209 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0108 23:10:59.715955  411209 command_runner.go:130] >       "repoTags": [
	I0108 23:10:59.715960  411209 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0108 23:10:59.715967  411209 command_runner.go:130] >       ],
	I0108 23:10:59.715970  411209 command_runner.go:130] >       "repoDigests": [
	I0108 23:10:59.715990  411209 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0108 23:10:59.716000  411209 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0108 23:10:59.716003  411209 command_runner.go:130] >       ],
	I0108 23:10:59.716010  411209 command_runner.go:130] >       "size": "61551410",
	I0108 23:10:59.716017  411209 command_runner.go:130] >       "uid": {
	I0108 23:10:59.716020  411209 command_runner.go:130] >         "value": "0"
	I0108 23:10:59.716024  411209 command_runner.go:130] >       },
	I0108 23:10:59.716028  411209 command_runner.go:130] >       "username": "",
	I0108 23:10:59.716035  411209 command_runner.go:130] >       "spec": null,
	I0108 23:10:59.716042  411209 command_runner.go:130] >       "pinned": false
	I0108 23:10:59.716051  411209 command_runner.go:130] >     },
	I0108 23:10:59.716056  411209 command_runner.go:130] >     {
	I0108 23:10:59.716065  411209 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0108 23:10:59.716069  411209 command_runner.go:130] >       "repoTags": [
	I0108 23:10:59.716077  411209 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0108 23:10:59.716080  411209 command_runner.go:130] >       ],
	I0108 23:10:59.716087  411209 command_runner.go:130] >       "repoDigests": [
	I0108 23:10:59.716093  411209 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0108 23:10:59.716102  411209 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0108 23:10:59.716112  411209 command_runner.go:130] >       ],
	I0108 23:10:59.716118  411209 command_runner.go:130] >       "size": "750414",
	I0108 23:10:59.716125  411209 command_runner.go:130] >       "uid": {
	I0108 23:10:59.716130  411209 command_runner.go:130] >         "value": "65535"
	I0108 23:10:59.716134  411209 command_runner.go:130] >       },
	I0108 23:10:59.716140  411209 command_runner.go:130] >       "username": "",
	I0108 23:10:59.716144  411209 command_runner.go:130] >       "spec": null,
	I0108 23:10:59.716151  411209 command_runner.go:130] >       "pinned": false
	I0108 23:10:59.716154  411209 command_runner.go:130] >     }
	I0108 23:10:59.716158  411209 command_runner.go:130] >   ]
	I0108 23:10:59.716161  411209 command_runner.go:130] > }
	I0108 23:10:59.717672  411209 crio.go:496] all images are preloaded for cri-o runtime.
	I0108 23:10:59.717693  411209 crio.go:415] Images already preloaded, skipping extraction
	I0108 23:10:59.717739  411209 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 23:10:59.747354  411209 command_runner.go:130] > {
	I0108 23:10:59.747376  411209 command_runner.go:130] >   "images": [
	I0108 23:10:59.747381  411209 command_runner.go:130] >     {
	I0108 23:10:59.747392  411209 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0108 23:10:59.747397  411209 command_runner.go:130] >       "repoTags": [
	I0108 23:10:59.747402  411209 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0108 23:10:59.747406  411209 command_runner.go:130] >       ],
	I0108 23:10:59.747410  411209 command_runner.go:130] >       "repoDigests": [
	I0108 23:10:59.747421  411209 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0108 23:10:59.747431  411209 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0108 23:10:59.747435  411209 command_runner.go:130] >       ],
	I0108 23:10:59.747439  411209 command_runner.go:130] >       "size": "65258016",
	I0108 23:10:59.747446  411209 command_runner.go:130] >       "uid": null,
	I0108 23:10:59.747450  411209 command_runner.go:130] >       "username": "",
	I0108 23:10:59.747457  411209 command_runner.go:130] >       "spec": null,
	I0108 23:10:59.747461  411209 command_runner.go:130] >       "pinned": false
	I0108 23:10:59.747465  411209 command_runner.go:130] >     },
	I0108 23:10:59.747471  411209 command_runner.go:130] >     {
	I0108 23:10:59.747477  411209 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0108 23:10:59.747481  411209 command_runner.go:130] >       "repoTags": [
	I0108 23:10:59.747486  411209 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0108 23:10:59.747489  411209 command_runner.go:130] >       ],
	I0108 23:10:59.747493  411209 command_runner.go:130] >       "repoDigests": [
	I0108 23:10:59.747500  411209 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0108 23:10:59.747507  411209 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0108 23:10:59.747511  411209 command_runner.go:130] >       ],
	I0108 23:10:59.747518  411209 command_runner.go:130] >       "size": "31470524",
	I0108 23:10:59.747525  411209 command_runner.go:130] >       "uid": null,
	I0108 23:10:59.747529  411209 command_runner.go:130] >       "username": "",
	I0108 23:10:59.747535  411209 command_runner.go:130] >       "spec": null,
	I0108 23:10:59.747540  411209 command_runner.go:130] >       "pinned": false
	I0108 23:10:59.747546  411209 command_runner.go:130] >     },
	I0108 23:10:59.747549  411209 command_runner.go:130] >     {
	I0108 23:10:59.747555  411209 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0108 23:10:59.747561  411209 command_runner.go:130] >       "repoTags": [
	I0108 23:10:59.747575  411209 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0108 23:10:59.747582  411209 command_runner.go:130] >       ],
	I0108 23:10:59.747586  411209 command_runner.go:130] >       "repoDigests": [
	I0108 23:10:59.747594  411209 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0108 23:10:59.747603  411209 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0108 23:10:59.747607  411209 command_runner.go:130] >       ],
	I0108 23:10:59.747611  411209 command_runner.go:130] >       "size": "53621675",
	I0108 23:10:59.747618  411209 command_runner.go:130] >       "uid": null,
	I0108 23:10:59.747622  411209 command_runner.go:130] >       "username": "",
	I0108 23:10:59.747626  411209 command_runner.go:130] >       "spec": null,
	I0108 23:10:59.747630  411209 command_runner.go:130] >       "pinned": false
	I0108 23:10:59.747634  411209 command_runner.go:130] >     },
	I0108 23:10:59.747637  411209 command_runner.go:130] >     {
	I0108 23:10:59.747643  411209 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0108 23:10:59.747650  411209 command_runner.go:130] >       "repoTags": [
	I0108 23:10:59.747655  411209 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0108 23:10:59.747660  411209 command_runner.go:130] >       ],
	I0108 23:10:59.747664  411209 command_runner.go:130] >       "repoDigests": [
	I0108 23:10:59.747673  411209 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0108 23:10:59.747682  411209 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0108 23:10:59.747692  411209 command_runner.go:130] >       ],
	I0108 23:10:59.747699  411209 command_runner.go:130] >       "size": "295456551",
	I0108 23:10:59.747703  411209 command_runner.go:130] >       "uid": {
	I0108 23:10:59.747707  411209 command_runner.go:130] >         "value": "0"
	I0108 23:10:59.747712  411209 command_runner.go:130] >       },
	I0108 23:10:59.747717  411209 command_runner.go:130] >       "username": "",
	I0108 23:10:59.747723  411209 command_runner.go:130] >       "spec": null,
	I0108 23:10:59.747727  411209 command_runner.go:130] >       "pinned": false
	I0108 23:10:59.747730  411209 command_runner.go:130] >     },
	I0108 23:10:59.747734  411209 command_runner.go:130] >     {
	I0108 23:10:59.747740  411209 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0108 23:10:59.747746  411209 command_runner.go:130] >       "repoTags": [
	I0108 23:10:59.747751  411209 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0108 23:10:59.747757  411209 command_runner.go:130] >       ],
	I0108 23:10:59.747762  411209 command_runner.go:130] >       "repoDigests": [
	I0108 23:10:59.747774  411209 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0108 23:10:59.747785  411209 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0108 23:10:59.747791  411209 command_runner.go:130] >       ],
	I0108 23:10:59.747796  411209 command_runner.go:130] >       "size": "127226832",
	I0108 23:10:59.747800  411209 command_runner.go:130] >       "uid": {
	I0108 23:10:59.747806  411209 command_runner.go:130] >         "value": "0"
	I0108 23:10:59.747809  411209 command_runner.go:130] >       },
	I0108 23:10:59.747816  411209 command_runner.go:130] >       "username": "",
	I0108 23:10:59.747821  411209 command_runner.go:130] >       "spec": null,
	I0108 23:10:59.747825  411209 command_runner.go:130] >       "pinned": false
	I0108 23:10:59.747831  411209 command_runner.go:130] >     },
	I0108 23:10:59.747834  411209 command_runner.go:130] >     {
	I0108 23:10:59.747841  411209 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0108 23:10:59.747847  411209 command_runner.go:130] >       "repoTags": [
	I0108 23:10:59.747852  411209 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0108 23:10:59.747858  411209 command_runner.go:130] >       ],
	I0108 23:10:59.747862  411209 command_runner.go:130] >       "repoDigests": [
	I0108 23:10:59.747871  411209 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0108 23:10:59.747880  411209 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0108 23:10:59.747886  411209 command_runner.go:130] >       ],
	I0108 23:10:59.747896  411209 command_runner.go:130] >       "size": "123261750",
	I0108 23:10:59.747900  411209 command_runner.go:130] >       "uid": {
	I0108 23:10:59.747907  411209 command_runner.go:130] >         "value": "0"
	I0108 23:10:59.747911  411209 command_runner.go:130] >       },
	I0108 23:10:59.747917  411209 command_runner.go:130] >       "username": "",
	I0108 23:10:59.747921  411209 command_runner.go:130] >       "spec": null,
	I0108 23:10:59.747925  411209 command_runner.go:130] >       "pinned": false
	I0108 23:10:59.747931  411209 command_runner.go:130] >     },
	I0108 23:10:59.747935  411209 command_runner.go:130] >     {
	I0108 23:10:59.747943  411209 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0108 23:10:59.747947  411209 command_runner.go:130] >       "repoTags": [
	I0108 23:10:59.747953  411209 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0108 23:10:59.747956  411209 command_runner.go:130] >       ],
	I0108 23:10:59.747961  411209 command_runner.go:130] >       "repoDigests": [
	I0108 23:10:59.747968  411209 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0108 23:10:59.747977  411209 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0108 23:10:59.747983  411209 command_runner.go:130] >       ],
	I0108 23:10:59.747989  411209 command_runner.go:130] >       "size": "74749335",
	I0108 23:10:59.747998  411209 command_runner.go:130] >       "uid": null,
	I0108 23:10:59.748002  411209 command_runner.go:130] >       "username": "",
	I0108 23:10:59.748006  411209 command_runner.go:130] >       "spec": null,
	I0108 23:10:59.748014  411209 command_runner.go:130] >       "pinned": false
	I0108 23:10:59.748017  411209 command_runner.go:130] >     },
	I0108 23:10:59.748021  411209 command_runner.go:130] >     {
	I0108 23:10:59.748029  411209 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0108 23:10:59.748036  411209 command_runner.go:130] >       "repoTags": [
	I0108 23:10:59.748041  411209 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0108 23:10:59.748047  411209 command_runner.go:130] >       ],
	I0108 23:10:59.748051  411209 command_runner.go:130] >       "repoDigests": [
	I0108 23:10:59.748072  411209 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0108 23:10:59.748082  411209 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0108 23:10:59.748087  411209 command_runner.go:130] >       ],
	I0108 23:10:59.748092  411209 command_runner.go:130] >       "size": "61551410",
	I0108 23:10:59.748098  411209 command_runner.go:130] >       "uid": {
	I0108 23:10:59.748102  411209 command_runner.go:130] >         "value": "0"
	I0108 23:10:59.748112  411209 command_runner.go:130] >       },
	I0108 23:10:59.748120  411209 command_runner.go:130] >       "username": "",
	I0108 23:10:59.748124  411209 command_runner.go:130] >       "spec": null,
	I0108 23:10:59.748128  411209 command_runner.go:130] >       "pinned": false
	I0108 23:10:59.748134  411209 command_runner.go:130] >     },
	I0108 23:10:59.748140  411209 command_runner.go:130] >     {
	I0108 23:10:59.748153  411209 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0108 23:10:59.748160  411209 command_runner.go:130] >       "repoTags": [
	I0108 23:10:59.748165  411209 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0108 23:10:59.748171  411209 command_runner.go:130] >       ],
	I0108 23:10:59.748175  411209 command_runner.go:130] >       "repoDigests": [
	I0108 23:10:59.748184  411209 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0108 23:10:59.748191  411209 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0108 23:10:59.748197  411209 command_runner.go:130] >       ],
	I0108 23:10:59.748201  411209 command_runner.go:130] >       "size": "750414",
	I0108 23:10:59.748205  411209 command_runner.go:130] >       "uid": {
	I0108 23:10:59.748209  411209 command_runner.go:130] >         "value": "65535"
	I0108 23:10:59.748216  411209 command_runner.go:130] >       },
	I0108 23:10:59.748224  411209 command_runner.go:130] >       "username": "",
	I0108 23:10:59.748230  411209 command_runner.go:130] >       "spec": null,
	I0108 23:10:59.748234  411209 command_runner.go:130] >       "pinned": false
	I0108 23:10:59.748240  411209 command_runner.go:130] >     }
	I0108 23:10:59.748244  411209 command_runner.go:130] >   ]
	I0108 23:10:59.748247  411209 command_runner.go:130] > }
	I0108 23:10:59.750044  411209 crio.go:496] all images are preloaded for cri-o runtime.
	I0108 23:10:59.750072  411209 cache_images.go:84] Images are preloaded, skipping loading
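The JSON dump above is CRI-O's image inventory as reported over the CRI, which is what the preload check compares against. To reproduce the same listing by hand on the node, something like this should work (a sketch, assuming crictl is installed and CRI-O is on its default socket path):

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock images -o json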
	I0108 23:10:59.750152  411209 ssh_runner.go:195] Run: crio config
	I0108 23:10:59.785523  411209 command_runner.go:130] ! time="2024-01-08 23:10:59.785143759Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0108 23:10:59.785564  411209 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0108 23:10:59.790742  411209 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0108 23:10:59.790765  411209 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0108 23:10:59.790772  411209 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0108 23:10:59.790783  411209 command_runner.go:130] > #
	I0108 23:10:59.790791  411209 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0108 23:10:59.790797  411209 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0108 23:10:59.790806  411209 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0108 23:10:59.790815  411209 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0108 23:10:59.790820  411209 command_runner.go:130] > # reload'.
	I0108 23:10:59.790829  411209 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0108 23:10:59.790836  411209 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0108 23:10:59.790844  411209 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0108 23:10:59.790852  411209 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0108 23:10:59.790856  411209 command_runner.go:130] > [crio]
	I0108 23:10:59.790865  411209 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0108 23:10:59.790872  411209 command_runner.go:130] > # container images, in this directory.
	I0108 23:10:59.790884  411209 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0108 23:10:59.790893  411209 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0108 23:10:59.790898  411209 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0108 23:10:59.790906  411209 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0108 23:10:59.790914  411209 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0108 23:10:59.790919  411209 command_runner.go:130] > # storage_driver = "vfs"
	I0108 23:10:59.790927  411209 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0108 23:10:59.790932  411209 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0108 23:10:59.790939  411209 command_runner.go:130] > # storage_option = [
	I0108 23:10:59.790942  411209 command_runner.go:130] > # ]
	I0108 23:10:59.790953  411209 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0108 23:10:59.790962  411209 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0108 23:10:59.790967  411209 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0108 23:10:59.790974  411209 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0108 23:10:59.790983  411209 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0108 23:10:59.790988  411209 command_runner.go:130] > # always happen on a node reboot
	I0108 23:10:59.790994  411209 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0108 23:10:59.791008  411209 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0108 23:10:59.791017  411209 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0108 23:10:59.791030  411209 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0108 23:10:59.791040  411209 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0108 23:10:59.791050  411209 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0108 23:10:59.791058  411209 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0108 23:10:59.791065  411209 command_runner.go:130] > # internal_wipe = true
	I0108 23:10:59.791072  411209 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0108 23:10:59.791081  411209 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0108 23:10:59.791086  411209 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0108 23:10:59.791091  411209 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0108 23:10:59.791102  411209 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0108 23:10:59.791106  411209 command_runner.go:130] > [crio.api]
	I0108 23:10:59.791112  411209 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0108 23:10:59.791119  411209 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0108 23:10:59.791124  411209 command_runner.go:130] > # IP address on which the stream server will listen.
	I0108 23:10:59.791129  411209 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0108 23:10:59.791136  411209 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0108 23:10:59.791144  411209 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0108 23:10:59.791148  411209 command_runner.go:130] > # stream_port = "0"
	I0108 23:10:59.791156  411209 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0108 23:10:59.791160  411209 command_runner.go:130] > # stream_enable_tls = false
	I0108 23:10:59.791168  411209 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0108 23:10:59.791173  411209 command_runner.go:130] > # stream_idle_timeout = ""
	I0108 23:10:59.791179  411209 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0108 23:10:59.791185  411209 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0108 23:10:59.791191  411209 command_runner.go:130] > # minutes.
	I0108 23:10:59.791195  411209 command_runner.go:130] > # stream_tls_cert = ""
	I0108 23:10:59.791201  411209 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0108 23:10:59.791212  411209 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0108 23:10:59.791218  411209 command_runner.go:130] > # stream_tls_key = ""
	I0108 23:10:59.791224  411209 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0108 23:10:59.791236  411209 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0108 23:10:59.791247  411209 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0108 23:10:59.791253  411209 command_runner.go:130] > # stream_tls_ca = ""
	I0108 23:10:59.791280  411209 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0108 23:10:59.791291  411209 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0108 23:10:59.791300  411209 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0108 23:10:59.791307  411209 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0108 23:10:59.791330  411209 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0108 23:10:59.791338  411209 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0108 23:10:59.791342  411209 command_runner.go:130] > [crio.runtime]
	I0108 23:10:59.791350  411209 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0108 23:10:59.791355  411209 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0108 23:10:59.791362  411209 command_runner.go:130] > # "nofile=1024:2048"
	I0108 23:10:59.791370  411209 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0108 23:10:59.791374  411209 command_runner.go:130] > # default_ulimits = [
	I0108 23:10:59.791379  411209 command_runner.go:130] > # ]
	I0108 23:10:59.791388  411209 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0108 23:10:59.791392  411209 command_runner.go:130] > # no_pivot = false
	I0108 23:10:59.791400  411209 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0108 23:10:59.791406  411209 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0108 23:10:59.791413  411209 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0108 23:10:59.791419  411209 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0108 23:10:59.791426  411209 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0108 23:10:59.791432  411209 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0108 23:10:59.791438  411209 command_runner.go:130] > # conmon = ""
	I0108 23:10:59.791443  411209 command_runner.go:130] > # Cgroup setting for conmon
	I0108 23:10:59.791452  411209 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0108 23:10:59.791456  411209 command_runner.go:130] > conmon_cgroup = "pod"
	I0108 23:10:59.791464  411209 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0108 23:10:59.791469  411209 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0108 23:10:59.791478  411209 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0108 23:10:59.791482  411209 command_runner.go:130] > # conmon_env = [
	I0108 23:10:59.791488  411209 command_runner.go:130] > # ]
	I0108 23:10:59.791496  411209 command_runner.go:130] > # Additional environment variables to set for all the
	I0108 23:10:59.791505  411209 command_runner.go:130] > # containers. These are overridden if set in the
	I0108 23:10:59.791513  411209 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0108 23:10:59.791517  411209 command_runner.go:130] > # default_env = [
	I0108 23:10:59.791522  411209 command_runner.go:130] > # ]
	I0108 23:10:59.791533  411209 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0108 23:10:59.791540  411209 command_runner.go:130] > # selinux = false
	I0108 23:10:59.791546  411209 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0108 23:10:59.791554  411209 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0108 23:10:59.791560  411209 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0108 23:10:59.791566  411209 command_runner.go:130] > # seccomp_profile = ""
	I0108 23:10:59.791571  411209 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0108 23:10:59.791577  411209 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0108 23:10:59.791585  411209 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0108 23:10:59.791589  411209 command_runner.go:130] > # which might increase security.
	I0108 23:10:59.791596  411209 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0108 23:10:59.791603  411209 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0108 23:10:59.791611  411209 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0108 23:10:59.791619  411209 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0108 23:10:59.791630  411209 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0108 23:10:59.791638  411209 command_runner.go:130] > # This option supports live configuration reload.
	I0108 23:10:59.791642  411209 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0108 23:10:59.791648  411209 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0108 23:10:59.791655  411209 command_runner.go:130] > # the cgroup blockio controller.
	I0108 23:10:59.791659  411209 command_runner.go:130] > # blockio_config_file = ""
	I0108 23:10:59.791667  411209 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0108 23:10:59.791671  411209 command_runner.go:130] > # irqbalance daemon.
	I0108 23:10:59.791676  411209 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0108 23:10:59.791683  411209 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0108 23:10:59.791688  411209 command_runner.go:130] > # This option supports live configuration reload.
	I0108 23:10:59.791695  411209 command_runner.go:130] > # rdt_config_file = ""
	I0108 23:10:59.791700  411209 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0108 23:10:59.791706  411209 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0108 23:10:59.791712  411209 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0108 23:10:59.791719  411209 command_runner.go:130] > # separate_pull_cgroup = ""
	I0108 23:10:59.791725  411209 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0108 23:10:59.791735  411209 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0108 23:10:59.791742  411209 command_runner.go:130] > # will be added.
	I0108 23:10:59.791746  411209 command_runner.go:130] > # default_capabilities = [
	I0108 23:10:59.791752  411209 command_runner.go:130] > # 	"CHOWN",
	I0108 23:10:59.791756  411209 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0108 23:10:59.791760  411209 command_runner.go:130] > # 	"FSETID",
	I0108 23:10:59.791763  411209 command_runner.go:130] > # 	"FOWNER",
	I0108 23:10:59.791767  411209 command_runner.go:130] > # 	"SETGID",
	I0108 23:10:59.791771  411209 command_runner.go:130] > # 	"SETUID",
	I0108 23:10:59.791775  411209 command_runner.go:130] > # 	"SETPCAP",
	I0108 23:10:59.791779  411209 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0108 23:10:59.791785  411209 command_runner.go:130] > # 	"KILL",
	I0108 23:10:59.791789  411209 command_runner.go:130] > # ]
	I0108 23:10:59.791798  411209 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0108 23:10:59.791805  411209 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0108 23:10:59.791811  411209 command_runner.go:130] > # add_inheritable_capabilities = true
	I0108 23:10:59.791817  411209 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0108 23:10:59.791825  411209 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0108 23:10:59.791831  411209 command_runner.go:130] > # default_sysctls = [
	I0108 23:10:59.791837  411209 command_runner.go:130] > # ]
	I0108 23:10:59.791842  411209 command_runner.go:130] > # List of devices on the host that a
	I0108 23:10:59.791850  411209 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0108 23:10:59.791855  411209 command_runner.go:130] > # allowed_devices = [
	I0108 23:10:59.791861  411209 command_runner.go:130] > # 	"/dev/fuse",
	I0108 23:10:59.791864  411209 command_runner.go:130] > # ]
	I0108 23:10:59.791871  411209 command_runner.go:130] > # List of additional devices, specified as
	I0108 23:10:59.791921  411209 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0108 23:10:59.791930  411209 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0108 23:10:59.791936  411209 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0108 23:10:59.791940  411209 command_runner.go:130] > # additional_devices = [
	I0108 23:10:59.791946  411209 command_runner.go:130] > # ]
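Per the host:container:permissions triplet documented above, an uncommented entry might look like the following (a hypothetical drop-in: CRI-O also reads config fragments from /etc/crio/crio.conf.d, and the device paths here are purely illustrative):

	sudo tee /etc/crio/crio.conf.d/10-devices.conf <<'EOF'
	[crio.runtime]
	additional_devices = [
		"/dev/sdc:/dev/xvdc:rwm"
	]
	EOF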
	I0108 23:10:59.791954  411209 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0108 23:10:59.791958  411209 command_runner.go:130] > # cdi_spec_dirs = [
	I0108 23:10:59.791964  411209 command_runner.go:130] > # 	"/etc/cdi",
	I0108 23:10:59.791968  411209 command_runner.go:130] > # 	"/var/run/cdi",
	I0108 23:10:59.791974  411209 command_runner.go:130] > # ]
	I0108 23:10:59.791982  411209 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0108 23:10:59.791990  411209 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0108 23:10:59.791995  411209 command_runner.go:130] > # Defaults to false.
	I0108 23:10:59.792006  411209 command_runner.go:130] > # device_ownership_from_security_context = false
	I0108 23:10:59.792014  411209 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0108 23:10:59.792023  411209 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0108 23:10:59.792027  411209 command_runner.go:130] > # hooks_dir = [
	I0108 23:10:59.792034  411209 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0108 23:10:59.792037  411209 command_runner.go:130] > # ]
	I0108 23:10:59.792043  411209 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0108 23:10:59.792051  411209 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0108 23:10:59.792057  411209 command_runner.go:130] > # its default mounts from the following two files:
	I0108 23:10:59.792062  411209 command_runner.go:130] > #
	I0108 23:10:59.792068  411209 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0108 23:10:59.792077  411209 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0108 23:10:59.792082  411209 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0108 23:10:59.792088  411209 command_runner.go:130] > #
	I0108 23:10:59.792094  411209 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0108 23:10:59.792104  411209 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0108 23:10:59.792112  411209 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0108 23:10:59.792120  411209 command_runner.go:130] > #      only add mounts it finds in this file.
	I0108 23:10:59.792123  411209 command_runner.go:130] > #
	I0108 23:10:59.792127  411209 command_runner.go:130] > # default_mounts_file = ""
	I0108 23:10:59.792134  411209 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0108 23:10:59.792143  411209 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0108 23:10:59.792150  411209 command_runner.go:130] > # pids_limit = 0
	I0108 23:10:59.792156  411209 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0108 23:10:59.792164  411209 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0108 23:10:59.792174  411209 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0108 23:10:59.792185  411209 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0108 23:10:59.792192  411209 command_runner.go:130] > # log_size_max = -1
	I0108 23:10:59.792198  411209 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0108 23:10:59.792205  411209 command_runner.go:130] > # log_to_journald = false
	I0108 23:10:59.792211  411209 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0108 23:10:59.792218  411209 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0108 23:10:59.792223  411209 command_runner.go:130] > # Path to directory for container attach sockets.
	I0108 23:10:59.792232  411209 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0108 23:10:59.792240  411209 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0108 23:10:59.792245  411209 command_runner.go:130] > # bind_mount_prefix = ""
	I0108 23:10:59.792252  411209 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0108 23:10:59.792256  411209 command_runner.go:130] > # read_only = false
	I0108 23:10:59.792265  411209 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0108 23:10:59.792271  411209 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0108 23:10:59.792277  411209 command_runner.go:130] > # live configuration reload.
	I0108 23:10:59.792281  411209 command_runner.go:130] > # log_level = "info"
	I0108 23:10:59.792289  411209 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0108 23:10:59.792294  411209 command_runner.go:130] > # This option supports live configuration reload.
	I0108 23:10:59.792300  411209 command_runner.go:130] > # log_filter = ""
	I0108 23:10:59.792306  411209 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0108 23:10:59.792315  411209 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0108 23:10:59.792319  411209 command_runner.go:130] > # separated by comma.
	I0108 23:10:59.792322  411209 command_runner.go:130] > # uid_mappings = ""
	I0108 23:10:59.792328  411209 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0108 23:10:59.792336  411209 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0108 23:10:59.792343  411209 command_runner.go:130] > # separated by comma.
	I0108 23:10:59.792349  411209 command_runner.go:130] > # gid_mappings = ""
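Following the containerUID:HostUID:Size form described above, a mapping that remaps a container's root to an unprivileged host range could be sketched as follows (values illustrative, same hypothetical drop-in mechanism as above):

	sudo tee /etc/crio/crio.conf.d/11-idmap.conf <<'EOF'
	[crio.runtime]
	uid_mappings = "0:100000:65536"
	gid_mappings = "0:100000:65536"
	EOF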
	I0108 23:10:59.792355  411209 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0108 23:10:59.792363  411209 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0108 23:10:59.792369  411209 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0108 23:10:59.792377  411209 command_runner.go:130] > # minimum_mappable_uid = -1
	I0108 23:10:59.792383  411209 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0108 23:10:59.792391  411209 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0108 23:10:59.792398  411209 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0108 23:10:59.792403  411209 command_runner.go:130] > # minimum_mappable_gid = -1
	I0108 23:10:59.792413  411209 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0108 23:10:59.792419  411209 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0108 23:10:59.792431  411209 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0108 23:10:59.792436  411209 command_runner.go:130] > # ctr_stop_timeout = 30
	I0108 23:10:59.792442  411209 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0108 23:10:59.792452  411209 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0108 23:10:59.792458  411209 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0108 23:10:59.792463  411209 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0108 23:10:59.792473  411209 command_runner.go:130] > # drop_infra_ctr = true
	I0108 23:10:59.792479  411209 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0108 23:10:59.792487  411209 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0108 23:10:59.792494  411209 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0108 23:10:59.792500  411209 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0108 23:10:59.792510  411209 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0108 23:10:59.792518  411209 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0108 23:10:59.792523  411209 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0108 23:10:59.792530  411209 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0108 23:10:59.792536  411209 command_runner.go:130] > # pinns_path = ""
	I0108 23:10:59.792542  411209 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0108 23:10:59.792550  411209 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0108 23:10:59.792556  411209 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0108 23:10:59.792562  411209 command_runner.go:130] > # default_runtime = "runc"
	I0108 23:10:59.792568  411209 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0108 23:10:59.792577  411209 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating it as a directory).
	I0108 23:10:59.792586  411209 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0108 23:10:59.792593  411209 command_runner.go:130] > # creation as a file is not desired either.
	I0108 23:10:59.792603  411209 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0108 23:10:59.792610  411209 command_runner.go:130] > # the hostname is being managed dynamically.
	I0108 23:10:59.792615  411209 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0108 23:10:59.792620  411209 command_runner.go:130] > # ]
	I0108 23:10:59.792626  411209 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0108 23:10:59.792634  411209 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0108 23:10:59.792640  411209 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0108 23:10:59.792649  411209 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0108 23:10:59.792653  411209 command_runner.go:130] > #
	I0108 23:10:59.792657  411209 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0108 23:10:59.792665  411209 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0108 23:10:59.792669  411209 command_runner.go:130] > #  runtime_type = "oci"
	I0108 23:10:59.792678  411209 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0108 23:10:59.792685  411209 command_runner.go:130] > #  privileged_without_host_devices = false
	I0108 23:10:59.792689  411209 command_runner.go:130] > #  allowed_annotations = []
	I0108 23:10:59.792693  411209 command_runner.go:130] > # Where:
	I0108 23:10:59.792701  411209 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0108 23:10:59.792707  411209 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0108 23:10:59.792718  411209 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0108 23:10:59.792726  411209 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0108 23:10:59.792730  411209 command_runner.go:130] > #   in $PATH.
	I0108 23:10:59.792738  411209 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0108 23:10:59.792744  411209 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0108 23:10:59.792750  411209 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0108 23:10:59.792756  411209 command_runner.go:130] > #   state.
	I0108 23:10:59.792763  411209 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0108 23:10:59.792771  411209 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0108 23:10:59.792777  411209 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0108 23:10:59.792784  411209 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0108 23:10:59.792790  411209 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0108 23:10:59.792799  411209 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0108 23:10:59.792804  411209 command_runner.go:130] > #   The currently recognized values are:
	I0108 23:10:59.792812  411209 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0108 23:10:59.792821  411209 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0108 23:10:59.792827  411209 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0108 23:10:59.792834  411209 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0108 23:10:59.792843  411209 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0108 23:10:59.792852  411209 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0108 23:10:59.792858  411209 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0108 23:10:59.792867  411209 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0108 23:10:59.792872  411209 command_runner.go:130] > #   should be moved to the container's cgroup
	I0108 23:10:59.792878  411209 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0108 23:10:59.792884  411209 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0108 23:10:59.792890  411209 command_runner.go:130] > runtime_type = "oci"
	I0108 23:10:59.792894  411209 command_runner.go:130] > runtime_root = "/run/runc"
	I0108 23:10:59.792901  411209 command_runner.go:130] > runtime_config_path = ""
	I0108 23:10:59.792905  411209 command_runner.go:130] > monitor_path = ""
	I0108 23:10:59.792909  411209 command_runner.go:130] > monitor_cgroup = ""
	I0108 23:10:59.792915  411209 command_runner.go:130] > monitor_exec_cgroup = ""
	I0108 23:10:59.792972  411209 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0108 23:10:59.792980  411209 command_runner.go:130] > # running containers
	I0108 23:10:59.792984  411209 command_runner.go:130] > #[crio.runtime.runtimes.crun]
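Filling in the commented template per the runtime-handler format shown earlier, a crun registration might look like this (a sketch; the binary path is an assumption and must match the host install):

	sudo tee /etc/crio/crio.conf.d/20-crun.conf <<'EOF'
	[crio.runtime.runtimes.crun]
	# assumed install location; verify with `which crun`
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	EOF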
	I0108 23:10:59.792990  411209 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0108 23:10:59.792997  411209 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0108 23:10:59.793011  411209 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0108 23:10:59.793019  411209 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0108 23:10:59.793023  411209 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0108 23:10:59.793028  411209 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0108 23:10:59.793034  411209 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0108 23:10:59.793039  411209 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0108 23:10:59.793046  411209 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0108 23:10:59.793052  411209 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0108 23:10:59.793063  411209 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0108 23:10:59.793071  411209 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0108 23:10:59.793078  411209 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0108 23:10:59.793088  411209 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0108 23:10:59.793096  411209 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0108 23:10:59.793105  411209 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0108 23:10:59.793115  411209 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0108 23:10:59.793121  411209 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0108 23:10:59.793127  411209 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0108 23:10:59.793133  411209 command_runner.go:130] > # Example:
	I0108 23:10:59.793140  411209 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0108 23:10:59.793147  411209 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0108 23:10:59.793153  411209 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0108 23:10:59.793160  411209 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0108 23:10:59.793164  411209 command_runner.go:130] > # cpuset = 0
	I0108 23:10:59.793170  411209 command_runner.go:130] > # cpushares = "0-1"
	I0108 23:10:59.793174  411209 command_runner.go:130] > # Where:
	I0108 23:10:59.793179  411209 command_runner.go:130] > # The workload name is workload-type.
	I0108 23:10:59.793188  411209 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0108 23:10:59.793196  411209 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0108 23:10:59.793202  411209 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0108 23:10:59.793212  411209 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0108 23:10:59.793220  411209 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0108 23:10:59.793226  411209 command_runner.go:130] > # 
	I0108 23:10:59.793232  411209 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0108 23:10:59.793236  411209 command_runner.go:130] > #
	I0108 23:10:59.793242  411209 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0108 23:10:59.793251  411209 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0108 23:10:59.793259  411209 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0108 23:10:59.793268  411209 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0108 23:10:59.793273  411209 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0108 23:10:59.793279  411209 command_runner.go:130] > [crio.image]
	I0108 23:10:59.793285  411209 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0108 23:10:59.793292  411209 command_runner.go:130] > # default_transport = "docker://"
	I0108 23:10:59.793298  411209 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0108 23:10:59.793306  411209 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0108 23:10:59.793311  411209 command_runner.go:130] > # global_auth_file = ""
	I0108 23:10:59.793316  411209 command_runner.go:130] > # The image used to instantiate infra containers.
	I0108 23:10:59.793321  411209 command_runner.go:130] > # This option supports live configuration reload.
	I0108 23:10:59.793328  411209 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0108 23:10:59.793335  411209 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0108 23:10:59.793343  411209 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0108 23:10:59.793348  411209 command_runner.go:130] > # This option supports live configuration reload.
	I0108 23:10:59.793355  411209 command_runner.go:130] > # pause_image_auth_file = ""
	I0108 23:10:59.793360  411209 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0108 23:10:59.793368  411209 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0108 23:10:59.793376  411209 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0108 23:10:59.793385  411209 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0108 23:10:59.793389  411209 command_runner.go:130] > # pause_command = "/pause"
	I0108 23:10:59.793397  411209 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0108 23:10:59.793404  411209 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0108 23:10:59.793410  411209 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0108 23:10:59.793416  411209 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0108 23:10:59.793424  411209 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0108 23:10:59.793428  411209 command_runner.go:130] > # signature_policy = ""
	I0108 23:10:59.793439  411209 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0108 23:10:59.793448  411209 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0108 23:10:59.793452  411209 command_runner.go:130] > # changing them here.
	I0108 23:10:59.793460  411209 command_runner.go:130] > # insecure_registries = [
	I0108 23:10:59.793465  411209 command_runner.go:130] > # ]
	I0108 23:10:59.793471  411209 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0108 23:10:59.793479  411209 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0108 23:10:59.793484  411209 command_runner.go:130] > # image_volumes = "mkdir"
	I0108 23:10:59.793491  411209 command_runner.go:130] > # Temporary directory to use for storing big files
	I0108 23:10:59.793498  411209 command_runner.go:130] > # big_files_temporary_dir = ""
	I0108 23:10:59.793508  411209 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0108 23:10:59.793514  411209 command_runner.go:130] > # CNI plugins.
	I0108 23:10:59.793518  411209 command_runner.go:130] > [crio.network]
	I0108 23:10:59.793527  411209 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0108 23:10:59.793535  411209 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0108 23:10:59.793540  411209 command_runner.go:130] > # cni_default_network = ""
	I0108 23:10:59.793548  411209 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0108 23:10:59.793552  411209 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0108 23:10:59.793560  411209 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0108 23:10:59.793564  411209 command_runner.go:130] > # plugin_dirs = [
	I0108 23:10:59.793571  411209 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0108 23:10:59.793574  411209 command_runner.go:130] > # ]
	I0108 23:10:59.793582  411209 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0108 23:10:59.793586  411209 command_runner.go:130] > [crio.metrics]
	I0108 23:10:59.793591  411209 command_runner.go:130] > # Globally enable or disable metrics support.
	I0108 23:10:59.793595  411209 command_runner.go:130] > # enable_metrics = false
	I0108 23:10:59.793600  411209 command_runner.go:130] > # Specify enabled metrics collectors.
	I0108 23:10:59.793609  411209 command_runner.go:130] > # Per default all metrics are enabled.
	I0108 23:10:59.793616  411209 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0108 23:10:59.793624  411209 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0108 23:10:59.793630  411209 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0108 23:10:59.793636  411209 command_runner.go:130] > # metrics_collectors = [
	I0108 23:10:59.793640  411209 command_runner.go:130] > # 	"operations",
	I0108 23:10:59.793646  411209 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0108 23:10:59.793651  411209 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0108 23:10:59.793658  411209 command_runner.go:130] > # 	"operations_errors",
	I0108 23:10:59.793662  411209 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0108 23:10:59.793669  411209 command_runner.go:130] > # 	"image_pulls_by_name",
	I0108 23:10:59.793673  411209 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0108 23:10:59.793680  411209 command_runner.go:130] > # 	"image_pulls_failures",
	I0108 23:10:59.793684  411209 command_runner.go:130] > # 	"image_pulls_successes",
	I0108 23:10:59.793691  411209 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0108 23:10:59.793695  411209 command_runner.go:130] > # 	"image_layer_reuse",
	I0108 23:10:59.793705  411209 command_runner.go:130] > # 	"containers_oom_total",
	I0108 23:10:59.793709  411209 command_runner.go:130] > # 	"containers_oom",
	I0108 23:10:59.793720  411209 command_runner.go:130] > # 	"processes_defunct",
	I0108 23:10:59.793727  411209 command_runner.go:130] > # 	"operations_total",
	I0108 23:10:59.793731  411209 command_runner.go:130] > # 	"operations_latency_seconds",
	I0108 23:10:59.793740  411209 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0108 23:10:59.793745  411209 command_runner.go:130] > # 	"operations_errors_total",
	I0108 23:10:59.793752  411209 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0108 23:10:59.793756  411209 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0108 23:10:59.793763  411209 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0108 23:10:59.793767  411209 command_runner.go:130] > # 	"image_pulls_success_total",
	I0108 23:10:59.793774  411209 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0108 23:10:59.793778  411209 command_runner.go:130] > # 	"containers_oom_count_total",
	I0108 23:10:59.793781  411209 command_runner.go:130] > # ]
	I0108 23:10:59.793786  411209 command_runner.go:130] > # The port on which the metrics server will listen.
	I0108 23:10:59.793793  411209 command_runner.go:130] > # metrics_port = 9090
	I0108 23:10:59.793801  411209 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0108 23:10:59.793810  411209 command_runner.go:130] > # metrics_socket = ""
	I0108 23:10:59.793815  411209 command_runner.go:130] > # The certificate for the secure metrics server.
	I0108 23:10:59.793824  411209 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0108 23:10:59.793832  411209 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0108 23:10:59.793839  411209 command_runner.go:130] > # certificate on any modification event.
	I0108 23:10:59.793843  411209 command_runner.go:130] > # metrics_cert = ""
	I0108 23:10:59.793851  411209 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0108 23:10:59.793856  411209 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0108 23:10:59.793862  411209 command_runner.go:130] > # metrics_key = ""
	I0108 23:10:59.793867  411209 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0108 23:10:59.793873  411209 command_runner.go:130] > [crio.tracing]
	I0108 23:10:59.793879  411209 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0108 23:10:59.793883  411209 command_runner.go:130] > # enable_tracing = false
	I0108 23:10:59.793890  411209 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0108 23:10:59.793898  411209 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0108 23:10:59.793903  411209 command_runner.go:130] > # Number of samples to collect per million spans.
	I0108 23:10:59.793909  411209 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0108 23:10:59.793915  411209 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0108 23:10:59.793921  411209 command_runner.go:130] > [crio.stats]
	I0108 23:10:59.793927  411209 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0108 23:10:59.793935  411209 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0108 23:10:59.793942  411209 command_runner.go:130] > # stats_collection_period = 0
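Several options in the dump above are flagged with "This option supports live configuration reload"; per the header of the dump, CRI-O re-reads those on SIGHUP, so a daemon restart is not required. One way to trigger the reload (assuming a systemd-managed crio service):

	sudo systemctl kill -s HUP crio
	# without systemd, signalling the process directly also works:
	sudo pkill -HUP -x crio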
	I0108 23:10:59.794063  411209 cni.go:84] Creating CNI manager for ""
	I0108 23:10:59.794079  411209 cni.go:136] 1 nodes found, recommending kindnet
	I0108 23:10:59.794098  411209 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 23:10:59.794117  411209 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-659947 NodeName:multinode-659947 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 23:10:59.794251  411209 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-659947"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
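
This generated config comes from the kubeadm options struct logged at kubeadm.go:176 above. A minimal sketch of that style of template-driven rendering, using hypothetical field names rather than minikube's actual bootstrapper types; note that literal "%" values such as the "0%" eviction thresholds pass through text/template unchanged, but would be mangled (e.g. into %!"(MISSING)) if the rendered YAML were later handed to a printf-style formatter as a format string:

package main

import (
	"os"
	"text/template"
)

// kubeadmOpts is a hypothetical subset of the options minikube logs at kubeadm.go:176.
type kubeadmOpts struct {
	AdvertiseAddress  string
	NodeName          string
	PodSubnet         string
	ServiceCIDR       string
	KubernetesVersion string
}

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: 8443
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	opts := kubeadmOpts{
		AdvertiseAddress:  "192.168.58.2",
		NodeName:          "multinode-659947",
		PodSubnet:         "10.244.0.0/16",
		ServiceCIDR:       "10.96.0.0/12",
		KubernetesVersion: "v1.28.4",
	}
	// Render the config the way a text/template-based generator would.
	tmpl := template.Must(template.New("kubeadm").Parse(initTmpl))
	if err := tmpl.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}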
	
	I0108 23:10:59.794309  411209 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-659947 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-659947 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 23:10:59.794360  411209 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 23:10:59.801928  411209 command_runner.go:130] > kubeadm
	I0108 23:10:59.801946  411209 command_runner.go:130] > kubectl
	I0108 23:10:59.801950  411209 command_runner.go:130] > kubelet
	I0108 23:10:59.802691  411209 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 23:10:59.802758  411209 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 23:10:59.810766  411209 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I0108 23:10:59.827110  411209 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 23:10:59.843934  411209 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I0108 23:10:59.861935  411209 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0108 23:10:59.865585  411209 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
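
The bash one-liner above pins control-plane.minikube.internal in /etc/hosts: grep -v filters out any stale entry, the fresh mapping is appended, and the temp file is copied back so readers never see a half-written file. A minimal Go sketch of the same filter-and-rewrite pattern (hypothetical file path, sudo handling omitted):

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHost rewrites hostsPath so that exactly one line maps name to ip.
func pinHost(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any existing entry for the name, mirroring grep -v $'\t<name>$'.
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	// Write a temp file first, then rename, so the update is atomic.
	tmp := hostsPath + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, hostsPath)
}

func main() {
	if err := pinHost("/tmp/hosts", "192.168.58.2", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}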
	I0108 23:10:59.875908  411209 certs.go:56] Setting up /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/multinode-659947 for IP: 192.168.58.2
	I0108 23:10:59.875942  411209 certs.go:190] acquiring lock for shared ca certs: {Name:mka0fb25b2b3d7c6ea0a3bf3a5eb1e0289391c11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 23:10:59.876074  411209 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17830-321683/.minikube/ca.key
	I0108 23:10:59.876110  411209 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17830-321683/.minikube/proxy-client-ca.key
	I0108 23:10:59.876155  411209 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/multinode-659947/client.key
	I0108 23:10:59.876168  411209 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/multinode-659947/client.crt with IP's: []
	I0108 23:11:00.022485  411209 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/multinode-659947/client.crt ...
	I0108 23:11:00.022524  411209 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/multinode-659947/client.crt: {Name:mk7626ff241aa45839dcde93c9aa0aa8fd417e69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 23:11:00.022712  411209 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/multinode-659947/client.key ...
	I0108 23:11:00.022725  411209 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/multinode-659947/client.key: {Name:mk0300f8485e8ffb057b57d433722f9eca173acb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 23:11:00.022796  411209 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/multinode-659947/apiserver.key.cee25041
	I0108 23:11:00.022810  411209 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/multinode-659947/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0108 23:11:00.148819  411209 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/multinode-659947/apiserver.crt.cee25041 ...
	I0108 23:11:00.148864  411209 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/multinode-659947/apiserver.crt.cee25041: {Name:mkfe573bc9fa89ec9ad702bd9c12cc3a7e19f075 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 23:11:00.149050  411209 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/multinode-659947/apiserver.key.cee25041 ...
	I0108 23:11:00.149065  411209 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/multinode-659947/apiserver.key.cee25041: {Name:mk4bdd4ca79fbd3d363be09c1639f758a5bd6cc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 23:11:00.149135  411209 certs.go:337] copying /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/multinode-659947/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/multinode-659947/apiserver.crt
	I0108 23:11:00.149226  411209 certs.go:341] copying /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/multinode-659947/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/multinode-659947/apiserver.key
	I0108 23:11:00.149282  411209 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/multinode-659947/proxy-client.key
	I0108 23:11:00.149297  411209 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/multinode-659947/proxy-client.crt with IP's: []
	I0108 23:11:00.223141  411209 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/multinode-659947/proxy-client.crt ...
	I0108 23:11:00.223178  411209 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/multinode-659947/proxy-client.crt: {Name:mkc2125f88c1fb8fe556df8c442115d8f5eef96c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 23:11:00.223376  411209 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/multinode-659947/proxy-client.key ...
	I0108 23:11:00.223389  411209 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/multinode-659947/proxy-client.key: {Name:mk2203bd5ee63f8b90534a7e24a0d3bbb9a60bb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
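
The crypto.go:68 lines above generate certificates signed by the cluster CA for the listed IPs. A minimal standard-library sketch of that flow, assuming RSA keys and generating a throwaway CA inline (this is not minikube's exact crypto.go code):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

func main() {
	// Self-signed CA standing in for minikubeCA (ca.crt / ca.key).
	caKey := must(rsa.GenerateKey(rand.Reader, 2048))
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER := must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))
	caCert := must(x509.ParseCertificate(caDER))

	// Leaf certificate signed by the CA, valid for the apiserver IPs from the log.
	leafKey := must(rsa.GenerateKey(rand.Reader, 2048))
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		IPAddresses: []net.IP{
			net.ParseIP("192.168.58.2"), net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	leafDER := must(x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey))
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}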
	I0108 23:11:00.223490  411209 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/multinode-659947/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0108 23:11:00.223511  411209 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/multinode-659947/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0108 23:11:00.223521  411209 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/multinode-659947/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0108 23:11:00.223531  411209 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/multinode-659947/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0108 23:11:00.223549  411209 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-321683/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0108 23:11:00.223562  411209 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-321683/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0108 23:11:00.223575  411209 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-321683/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0108 23:11:00.223586  411209 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-321683/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0108 23:11:00.223634  411209 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-321683/.minikube/certs/home/jenkins/minikube-integration/17830-321683/.minikube/certs/328384.pem (1338 bytes)
	W0108 23:11:00.223669  411209 certs.go:433] ignoring /home/jenkins/minikube-integration/17830-321683/.minikube/certs/home/jenkins/minikube-integration/17830-321683/.minikube/certs/328384_empty.pem, impossibly tiny 0 bytes
	I0108 23:11:00.223681  411209 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-321683/.minikube/certs/home/jenkins/minikube-integration/17830-321683/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 23:11:00.223703  411209 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-321683/.minikube/certs/home/jenkins/minikube-integration/17830-321683/.minikube/certs/ca.pem (1082 bytes)
	I0108 23:11:00.223728  411209 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-321683/.minikube/certs/home/jenkins/minikube-integration/17830-321683/.minikube/certs/cert.pem (1123 bytes)
	I0108 23:11:00.223757  411209 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-321683/.minikube/certs/home/jenkins/minikube-integration/17830-321683/.minikube/certs/key.pem (1679 bytes)
	I0108 23:11:00.223793  411209 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-321683/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17830-321683/.minikube/files/etc/ssl/certs/3283842.pem (1708 bytes)
	I0108 23:11:00.223817  411209 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-321683/.minikube/files/etc/ssl/certs/3283842.pem -> /usr/share/ca-certificates/3283842.pem
	I0108 23:11:00.223830  411209 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-321683/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0108 23:11:00.223842  411209 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-321683/.minikube/certs/328384.pem -> /usr/share/ca-certificates/328384.pem
	I0108 23:11:00.224462  411209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/multinode-659947/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 23:11:00.247706  411209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/multinode-659947/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0108 23:11:00.270148  411209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/multinode-659947/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 23:11:00.292128  411209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/multinode-659947/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0108 23:11:00.315174  411209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-321683/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 23:11:00.337365  411209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-321683/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0108 23:11:00.359889  411209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-321683/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 23:11:00.381965  411209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-321683/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0108 23:11:00.404872  411209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-321683/.minikube/files/etc/ssl/certs/3283842.pem --> /usr/share/ca-certificates/3283842.pem (1708 bytes)
	I0108 23:11:00.427599  411209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-321683/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 23:11:00.450504  411209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-321683/.minikube/certs/328384.pem --> /usr/share/ca-certificates/328384.pem (1338 bytes)
	I0108 23:11:00.474849  411209 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 23:11:00.491508  411209 ssh_runner.go:195] Run: openssl version
	I0108 23:11:00.496393  411209 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0108 23:11:00.496625  411209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3283842.pem && ln -fs /usr/share/ca-certificates/3283842.pem /etc/ssl/certs/3283842.pem"
	I0108 23:11:00.505700  411209 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3283842.pem
	I0108 23:11:00.509047  411209 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan  8 22:58 /usr/share/ca-certificates/3283842.pem
	I0108 23:11:00.509107  411209 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 22:58 /usr/share/ca-certificates/3283842.pem
	I0108 23:11:00.509228  411209 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3283842.pem
	I0108 23:11:00.515431  411209 command_runner.go:130] > 3ec20f2e
	I0108 23:11:00.515701  411209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3283842.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 23:11:00.524921  411209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 23:11:00.533767  411209 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 23:11:00.537059  411209 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan  8 22:52 /usr/share/ca-certificates/minikubeCA.pem
	I0108 23:11:00.537100  411209 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 22:52 /usr/share/ca-certificates/minikubeCA.pem
	I0108 23:11:00.537146  411209 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 23:11:00.543347  411209 command_runner.go:130] > b5213941
	I0108 23:11:00.543545  411209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 23:11:00.552439  411209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/328384.pem && ln -fs /usr/share/ca-certificates/328384.pem /etc/ssl/certs/328384.pem"
	I0108 23:11:00.561381  411209 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/328384.pem
	I0108 23:11:00.564874  411209 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan  8 22:58 /usr/share/ca-certificates/328384.pem
	I0108 23:11:00.564924  411209 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 22:58 /usr/share/ca-certificates/328384.pem
	I0108 23:11:00.564967  411209 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/328384.pem
	I0108 23:11:00.571218  411209 command_runner.go:130] > 51391683
	I0108 23:11:00.571507  411209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/328384.pem /etc/ssl/certs/51391683.0"
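
The test -L / ln -fs commands above install each CA under /etc/ssl/certs/<hash>.0, where <hash> is the subject-name hash printed by openssl x509 -hash (3ec20f2e, b5213941, 51391683 above); OpenSSL's verifier looks certificates up by that filename. A small sketch that computes the hash the same way minikube does here, by shelling out to openssl:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// subjectHash returns the hash openssl uses to name certs in /etc/ssl/certs.
func subjectHash(pemPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	h, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("link target: /etc/ssl/certs/%s.0\n", h)
}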
	I0108 23:11:00.580370  411209 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 23:11:00.583607  411209 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 23:11:00.583654  411209 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 23:11:00.583707  411209 kubeadm.go:404] StartCluster: {Name:multinode-659947 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-659947 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 23:11:00.583806  411209 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0108 23:11:00.583850  411209 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 23:11:00.617792  411209 cri.go:89] found id: ""
	I0108 23:11:00.617876  411209 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 23:11:00.626444  411209 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0108 23:11:00.626476  411209 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0108 23:11:00.626483  411209 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0108 23:11:00.626559  411209 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 23:11:00.635009  411209 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0108 23:11:00.635084  411209 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 23:11:00.642369  411209 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0108 23:11:00.642393  411209 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0108 23:11:00.642406  411209 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0108 23:11:00.642417  411209 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 23:11:00.643084  411209 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 23:11:00.643127  411209 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 23:11:00.687961  411209 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0108 23:11:00.688030  411209 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I0108 23:11:00.688109  411209 kubeadm.go:322] [preflight] Running pre-flight checks
	I0108 23:11:00.688125  411209 command_runner.go:130] > [preflight] Running pre-flight checks
	I0108 23:11:00.724774  411209 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0108 23:11:00.724808  411209 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0108 23:11:00.724881  411209 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1047-gcp
	I0108 23:11:00.724893  411209 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1047-gcp
	I0108 23:11:00.724940  411209 kubeadm.go:322] OS: Linux
	I0108 23:11:00.724949  411209 command_runner.go:130] > OS: Linux
	I0108 23:11:00.725017  411209 kubeadm.go:322] CGROUPS_CPU: enabled
	I0108 23:11:00.725027  411209 command_runner.go:130] > CGROUPS_CPU: enabled
	I0108 23:11:00.725143  411209 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0108 23:11:00.725176  411209 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0108 23:11:00.725238  411209 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0108 23:11:00.725251  411209 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0108 23:11:00.725322  411209 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0108 23:11:00.725335  411209 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0108 23:11:00.725401  411209 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0108 23:11:00.725422  411209 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0108 23:11:00.725510  411209 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0108 23:11:00.725520  411209 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0108 23:11:00.725580  411209 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0108 23:11:00.725593  411209 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0108 23:11:00.725669  411209 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0108 23:11:00.725678  411209 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0108 23:11:00.725752  411209 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0108 23:11:00.725773  411209 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0108 23:11:00.789556  411209 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 23:11:00.789585  411209 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 23:11:00.789679  411209 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 23:11:00.789690  411209 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 23:11:00.789776  411209 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 23:11:00.789786  411209 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 23:11:00.986718  411209 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 23:11:00.989940  411209 out.go:204]   - Generating certificates and keys ...
	I0108 23:11:00.986833  411209 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 23:11:00.990063  411209 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0108 23:11:00.990086  411209 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0108 23:11:00.990163  411209 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0108 23:11:00.990172  411209 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0108 23:11:01.089070  411209 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0108 23:11:01.089117  411209 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0108 23:11:01.272665  411209 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0108 23:11:01.272705  411209 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0108 23:11:01.376850  411209 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0108 23:11:01.376882  411209 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0108 23:11:01.450969  411209 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0108 23:11:01.451004  411209 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0108 23:11:01.522148  411209 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0108 23:11:01.522184  411209 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0108 23:11:01.522341  411209 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-659947] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0108 23:11:01.522354  411209 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-659947] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0108 23:11:01.615364  411209 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0108 23:11:01.615394  411209 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0108 23:11:01.615537  411209 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-659947] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0108 23:11:01.615569  411209 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-659947] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0108 23:11:01.814807  411209 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0108 23:11:01.814839  411209 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0108 23:11:02.147361  411209 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0108 23:11:02.147418  411209 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0108 23:11:02.229370  411209 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0108 23:11:02.229412  411209 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0108 23:11:02.230701  411209 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 23:11:02.230729  411209 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 23:11:02.302199  411209 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 23:11:02.302238  411209 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 23:11:02.598117  411209 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 23:11:02.598149  411209 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 23:11:02.661233  411209 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 23:11:02.661270  411209 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 23:11:02.924889  411209 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 23:11:02.924925  411209 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 23:11:02.925285  411209 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 23:11:02.925312  411209 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 23:11:02.927631  411209 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 23:11:02.931015  411209 out.go:204]   - Booting up control plane ...
	I0108 23:11:02.927750  411209 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 23:11:02.931155  411209 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 23:11:02.931181  411209 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 23:11:02.931315  411209 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 23:11:02.931329  411209 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 23:11:02.931413  411209 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 23:11:02.931426  411209 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 23:11:02.939478  411209 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 23:11:02.939492  411209 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 23:11:02.940173  411209 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 23:11:02.940190  411209 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 23:11:02.940231  411209 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0108 23:11:02.940243  411209 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0108 23:11:03.014419  411209 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 23:11:03.014463  411209 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 23:11:08.016861  411209 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.002478 seconds
	I0108 23:11:08.016897  411209 command_runner.go:130] > [apiclient] All control plane components are healthy after 5.002478 seconds
	I0108 23:11:08.017024  411209 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 23:11:08.017033  411209 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 23:11:08.030130  411209 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 23:11:08.030152  411209 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 23:11:08.551019  411209 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 23:11:08.551058  411209 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0108 23:11:08.551292  411209 kubeadm.go:322] [mark-control-plane] Marking the node multinode-659947 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 23:11:08.551311  411209 command_runner.go:130] > [mark-control-plane] Marking the node multinode-659947 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 23:11:09.061474  411209 kubeadm.go:322] [bootstrap-token] Using token: h5ie0o.y0cuajt8qvkp5978
	I0108 23:11:09.063016  411209 out.go:204]   - Configuring RBAC rules ...
	I0108 23:11:09.061545  411209 command_runner.go:130] > [bootstrap-token] Using token: h5ie0o.y0cuajt8qvkp5978
	I0108 23:11:09.063145  411209 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 23:11:09.063156  411209 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 23:11:09.067464  411209 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 23:11:09.067505  411209 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 23:11:09.073844  411209 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 23:11:09.073873  411209 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 23:11:09.077650  411209 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 23:11:09.077657  411209 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 23:11:09.080427  411209 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 23:11:09.080454  411209 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 23:11:09.083467  411209 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 23:11:09.083492  411209 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 23:11:09.093270  411209 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 23:11:09.093286  411209 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 23:11:09.322127  411209 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0108 23:11:09.322163  411209 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0108 23:11:09.472502  411209 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0108 23:11:09.472541  411209 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0108 23:11:09.474350  411209 kubeadm.go:322] 
	I0108 23:11:09.474449  411209 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0108 23:11:09.474464  411209 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0108 23:11:09.474477  411209 kubeadm.go:322] 
	I0108 23:11:09.474632  411209 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0108 23:11:09.474659  411209 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0108 23:11:09.474667  411209 kubeadm.go:322] 
	I0108 23:11:09.474729  411209 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0108 23:11:09.474748  411209 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0108 23:11:09.474826  411209 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 23:11:09.474837  411209 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 23:11:09.474894  411209 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 23:11:09.474905  411209 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 23:11:09.474911  411209 kubeadm.go:322] 
	I0108 23:11:09.474974  411209 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0108 23:11:09.474985  411209 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0108 23:11:09.474990  411209 kubeadm.go:322] 
	I0108 23:11:09.475050  411209 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 23:11:09.475062  411209 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 23:11:09.475067  411209 kubeadm.go:322] 
	I0108 23:11:09.475130  411209 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0108 23:11:09.475140  411209 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0108 23:11:09.475228  411209 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 23:11:09.475241  411209 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 23:11:09.475341  411209 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 23:11:09.475350  411209 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 23:11:09.475356  411209 kubeadm.go:322] 
	I0108 23:11:09.475464  411209 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 23:11:09.475475  411209 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0108 23:11:09.475565  411209 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0108 23:11:09.475615  411209 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0108 23:11:09.475663  411209 kubeadm.go:322] 
	I0108 23:11:09.475802  411209 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token h5ie0o.y0cuajt8qvkp5978 \
	I0108 23:11:09.475820  411209 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token h5ie0o.y0cuajt8qvkp5978 \
	I0108 23:11:09.475967  411209 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:1f39001c59ef0575e1bc9780ffa85edc1681f20574bd7111835c2c8563c3cd8d \
	I0108 23:11:09.475976  411209 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:1f39001c59ef0575e1bc9780ffa85edc1681f20574bd7111835c2c8563c3cd8d \
	I0108 23:11:09.475999  411209 kubeadm.go:322] 	--control-plane 
	I0108 23:11:09.476008  411209 command_runner.go:130] > 	--control-plane 
	I0108 23:11:09.476017  411209 kubeadm.go:322] 
	I0108 23:11:09.476153  411209 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0108 23:11:09.476184  411209 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0108 23:11:09.476191  411209 kubeadm.go:322] 
	I0108 23:11:09.476324  411209 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token h5ie0o.y0cuajt8qvkp5978 \
	I0108 23:11:09.476340  411209 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token h5ie0o.y0cuajt8qvkp5978 \
	I0108 23:11:09.476539  411209 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:1f39001c59ef0575e1bc9780ffa85edc1681f20574bd7111835c2c8563c3cd8d 
	I0108 23:11:09.476577  411209 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:1f39001c59ef0575e1bc9780ffa85edc1681f20574bd7111835c2c8563c3cd8d 
	I0108 23:11:09.478799  411209 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-gcp\n", err: exit status 1
	I0108 23:11:09.478856  411209 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-gcp\n", err: exit status 1
	I0108 23:11:09.479014  411209 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 23:11:09.479048  411209 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
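
The --discovery-token-ca-cert-hash value printed in the join commands above is, per kubeadm's public-key pinning format, the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info. A minimal sketch that recomputes it from ca.crt:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// /var/lib/minikube/certs/ca.crt on the node; any CA cert path works.
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block in ca.crt")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// kubeadm pins sha256 over the raw SubjectPublicKeyInfo bytes.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}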
	I0108 23:11:09.479078  411209 cni.go:84] Creating CNI manager for ""
	I0108 23:11:09.479094  411209 cni.go:136] 1 nodes found, recommending kindnet
	I0108 23:11:09.481001  411209 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 23:11:09.482269  411209 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 23:11:09.486167  411209 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0108 23:11:09.486202  411209 command_runner.go:130] >   Size: 4085020   	Blocks: 7984       IO Block: 4096   regular file
	I0108 23:11:09.486214  411209 command_runner.go:130] > Device: 37h/55d	Inode: 1048091     Links: 1
	I0108 23:11:09.486224  411209 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 23:11:09.486235  411209 command_runner.go:130] > Access: 2023-12-04 16:39:01.000000000 +0000
	I0108 23:11:09.486248  411209 command_runner.go:130] > Modify: 2023-12-04 16:39:01.000000000 +0000
	I0108 23:11:09.486265  411209 command_runner.go:130] > Change: 2024-01-08 22:52:05.087229574 +0000
	I0108 23:11:09.486277  411209 command_runner.go:130] >  Birth: 2024-01-08 22:52:05.059227641 +0000
	I0108 23:11:09.486348  411209 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0108 23:11:09.486362  411209 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0108 23:11:09.561973  411209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 23:11:10.151638  411209 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0108 23:11:10.160048  411209 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0108 23:11:10.167902  411209 command_runner.go:130] > serviceaccount/kindnet created
	I0108 23:11:10.178938  411209 command_runner.go:130] > daemonset.apps/kindnet created
	I0108 23:11:10.183851  411209 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 23:11:10.183950  411209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:11:10.183955  411209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a minikube.k8s.io/name=multinode-659947 minikube.k8s.io/updated_at=2024_01_08T23_11_10_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:11:10.191099  411209 command_runner.go:130] > -16
	I0108 23:11:10.191173  411209 ops.go:34] apiserver oom_adj: -16
	I0108 23:11:10.270855  411209 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0108 23:11:10.270928  411209 command_runner.go:130] > node/multinode-659947 labeled
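
The ops.go:34 line records the apiserver's OOM adjustment (-16 here), read by the cat command launched above. A one-file sketch that performs the same check locally through os/exec:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same shell pipeline the log shows minikube running over SSH.
	out, err := exec.Command("/bin/bash", "-c",
		"cat /proc/$(pgrep kube-apiserver)/oom_adj").Output()
	if err != nil {
		fmt.Println("apiserver not running:", err)
		return
	}
	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(out)))
}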
	I0108 23:11:10.270984  411209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:11:10.336689  411209 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 23:11:10.771390  411209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:11:10.836033  411209 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 23:11:11.271707  411209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:11:11.336762  411209 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 23:11:11.771370  411209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:11:11.835811  411209 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 23:11:12.271608  411209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:11:12.336743  411209 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 23:11:12.771146  411209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:11:12.833436  411209 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 23:11:13.272044  411209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:11:13.336744  411209 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 23:11:13.771337  411209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:11:13.835580  411209 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 23:11:14.271203  411209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:11:14.336856  411209 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 23:11:14.771423  411209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:11:14.834099  411209 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 23:11:15.271946  411209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:11:15.335290  411209 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 23:11:15.771604  411209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:11:15.834847  411209 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 23:11:16.271652  411209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:11:16.334509  411209 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 23:11:16.771356  411209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:11:16.836744  411209 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 23:11:17.271416  411209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:11:17.336938  411209 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 23:11:17.771588  411209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:11:17.835283  411209 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 23:11:18.271877  411209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:11:18.336814  411209 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 23:11:18.771526  411209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:11:18.832924  411209 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 23:11:19.271820  411209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:11:19.333649  411209 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 23:11:19.771095  411209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:11:19.835455  411209 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 23:11:20.271592  411209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:11:20.336150  411209 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 23:11:20.771850  411209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:11:20.839707  411209 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 23:11:21.271226  411209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:11:21.340320  411209 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 23:11:21.771948  411209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:11:21.838408  411209 command_runner.go:130] > NAME      SECRETS   AGE
	I0108 23:11:21.838432  411209 command_runner.go:130] > default   0         0s
	I0108 23:11:21.841241  411209 kubeadm.go:1088] duration metric: took 11.6573672s to wait for elevateKubeSystemPrivileges.
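
The run of Error from server (NotFound): serviceaccounts "default" not found lines above is a ~500ms poll: kubeadm creates the default ServiceAccount asynchronously, and minikube retries kubectl get sa default until the token controller has produced it (about 11.7s in this run). A hedged sketch of that retry loop:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls until the "default" ServiceAccount exists or the timeout expires.
func waitForDefaultSA(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		err := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
			"get", "sa", "default").Run()
		if err == nil {
			return nil // the token controller has created the account
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default serviceaccount not created within %s", timeout)
}

func main() {
	if err := waitForDefaultSA(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}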
	I0108 23:11:21.841281  411209 kubeadm.go:406] StartCluster complete in 21.257580319s
	I0108 23:11:21.841308  411209 settings.go:142] acquiring lock: {Name:mkc902113864abc3d31cd188d3cc72ba1bd92615 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 23:11:21.841388  411209 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17830-321683/kubeconfig
	I0108 23:11:21.842116  411209 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-321683/kubeconfig: {Name:mkc128765c68b9b4bae543227dc1d65bab54c68e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 23:11:21.842398  411209 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 23:11:21.842555  411209 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0108 23:11:21.842628  411209 addons.go:69] Setting storage-provisioner=true in profile "multinode-659947"
	I0108 23:11:21.842643  411209 config.go:182] Loaded profile config "multinode-659947": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 23:11:21.842661  411209 addons.go:69] Setting default-storageclass=true in profile "multinode-659947"
	I0108 23:11:21.842678  411209 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-659947"
	I0108 23:11:21.842791  411209 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17830-321683/kubeconfig
	I0108 23:11:21.842655  411209 addons.go:237] Setting addon storage-provisioner=true in "multinode-659947"
	I0108 23:11:21.842908  411209 host.go:66] Checking if "multinode-659947" exists ...
	I0108 23:11:21.843135  411209 cli_runner.go:164] Run: docker container inspect multinode-659947 --format={{.State.Status}}
	I0108 23:11:21.843130  411209 kapi.go:59] client config for multinode-659947: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17830-321683/.minikube/profiles/multinode-659947/client.crt", KeyFile:"/home/jenkins/minikube-integration/17830-321683/.minikube/profiles/multinode-659947/client.key", CAFile:"/home/jenkins/minikube-integration/17830-321683/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
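The rest.Config dump above shows the client minikube assembles for this profile: the API server at https://192.168.58.2:8443, authenticated with the profile's client certificate and key and verified against the cluster CA. A minimal client-go sketch of an equivalent construction (a sketch only, with the paths and address taken from the log and error handling reduced to a panic):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	base := "/home/jenkins/minikube-integration/17830-321683/.minikube"
	cfg := &rest.Config{
		Host: "https://192.168.58.2:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: base + "/profiles/multinode-659947/client.crt",
			KeyFile:  base + "/profiles/multinode-659947/client.key",
			CAFile:   base + "/ca.crt",
		},
	}
	// NewForConfig turns the rest.Config into a typed clientset for the API server above.
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("clientset ready: %T\n", cs)
}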
	I0108 23:11:21.843414  411209 cli_runner.go:164] Run: docker container inspect multinode-659947 --format={{.State.Status}}
	I0108 23:11:21.843931  411209 cert_rotation.go:137] Starting client certificate rotation controller
	I0108 23:11:21.844312  411209 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0108 23:11:21.844332  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:21.844343  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:21.844352  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:21.854734  411209 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0108 23:11:21.854764  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:21.854777  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:21.854786  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:21.854795  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:21.854803  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:21.854811  411209 round_trippers.go:580]     Content-Length: 291
	I0108 23:11:21.854817  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:21 GMT
	I0108 23:11:21.854828  411209 round_trippers.go:580]     Audit-Id: a352739b-5197-40ee-8f09-36de4c0b8bda
	I0108 23:11:21.854854  411209 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"1043e7d7-0de0-4829-9106-70235d9b6dea","resourceVersion":"267","creationTimestamp":"2024-01-08T23:11:09Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0108 23:11:21.855234  411209 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"1043e7d7-0de0-4829-9106-70235d9b6dea","resourceVersion":"267","creationTimestamp":"2024-01-08T23:11:09Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0108 23:11:21.855319  411209 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0108 23:11:21.855331  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:21.855339  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:21.855347  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:21.855353  411209 round_trippers.go:473]     Content-Type: application/json
	I0108 23:11:21.864158  411209 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0108 23:11:21.864189  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:21.864201  411209 round_trippers.go:580]     Audit-Id: d538779f-27f8-4dcc-bbb2-a04762bd647c
	I0108 23:11:21.864209  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:21.864219  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:21.864227  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:21.864235  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:21.864244  411209 round_trippers.go:580]     Content-Length: 291
	I0108 23:11:21.864252  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:21 GMT
	I0108 23:11:21.864286  411209 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"1043e7d7-0de0-4829-9106-70235d9b6dea","resourceVersion":"339","creationTimestamp":"2024-01-08T23:11:09Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
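The GET/PUT pair above is the coredns rescale: minikube reads the deployment's Scale subresource (spec.replicas: 2), lowers it to 1, and writes it back; the kapi.go:248 line further below confirms the rescale. A hedged client-go sketch of the same exchange (clientset construction as in the earlier sketch; namespace and deployment name taken from the log):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// rescaleCoreDNS mirrors the GET + PUT above: read the Scale subresource,
// set spec.replicas, and write it back.
func rescaleCoreDNS(ctx context.Context, cs kubernetes.Interface, replicas int32) error {
	deployments := cs.AppsV1().Deployments("kube-system")
	scale, err := deployments.GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	scale.Spec.Replicas = replicas
	_, err = deployments.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
	return err
}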
	I0108 23:11:21.865138  411209 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17830-321683/kubeconfig
	I0108 23:11:21.865408  411209 kapi.go:59] client config for multinode-659947: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17830-321683/.minikube/profiles/multinode-659947/client.crt", KeyFile:"/home/jenkins/minikube-integration/17830-321683/.minikube/profiles/multinode-659947/client.key", CAFile:"/home/jenkins/minikube-integration/17830-321683/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 23:11:21.865658  411209 addons.go:237] Setting addon default-storageclass=true in "multinode-659947"
	I0108 23:11:21.865685  411209 host.go:66] Checking if "multinode-659947" exists ...
	I0108 23:11:21.866021  411209 cli_runner.go:164] Run: docker container inspect multinode-659947 --format={{.State.Status}}
	I0108 23:11:21.870633  411209 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 23:11:21.872196  411209 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 23:11:21.872224  411209 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 23:11:21.872290  411209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-659947
	I0108 23:11:21.886158  411209 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 23:11:21.886193  411209 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 23:11:21.886257  411209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-659947
	I0108 23:11:21.893073  411209 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/multinode-659947/id_rsa Username:docker}
	I0108 23:11:21.908622  411209 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/multinode-659947/id_rsa Username:docker}
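The two sshutil lines above open SSH sessions to the node's forwarded port 33149 on 127.0.0.1 (discovered earlier via the docker container inspect template on the "22/tcp" port mapping) so the addon manifests can be streamed from memory to /etc/kubernetes/addons. A rough sketch of that connection setup, assuming golang.org/x/crypto/ssh stands in for whatever minikube's sshutil wraps:

package main

import (
	"os"

	"golang.org/x/crypto/ssh"
)

// dialNode opens an SSH client to the forwarded container port using the
// machine's private key, matching the IP/Port/SSHKeyPath/Username in the log.
func dialNode(keyPath string) (*ssh.Client, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            "docker", // username from the log
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a local test VM
	}
	return ssh.Dial("tcp", "127.0.0.1:33149", cfg) // forwarded port from the log
}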
	I0108 23:11:21.945493  411209 command_runner.go:130] > apiVersion: v1
	I0108 23:11:21.945520  411209 command_runner.go:130] > data:
	I0108 23:11:21.945528  411209 command_runner.go:130] >   Corefile: |
	I0108 23:11:21.945534  411209 command_runner.go:130] >     .:53 {
	I0108 23:11:21.945541  411209 command_runner.go:130] >         errors
	I0108 23:11:21.945549  411209 command_runner.go:130] >         health {
	I0108 23:11:21.945557  411209 command_runner.go:130] >            lameduck 5s
	I0108 23:11:21.945563  411209 command_runner.go:130] >         }
	I0108 23:11:21.945570  411209 command_runner.go:130] >         ready
	I0108 23:11:21.945584  411209 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0108 23:11:21.945594  411209 command_runner.go:130] >            pods insecure
	I0108 23:11:21.945602  411209 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0108 23:11:21.945613  411209 command_runner.go:130] >            ttl 30
	I0108 23:11:21.945621  411209 command_runner.go:130] >         }
	I0108 23:11:21.945632  411209 command_runner.go:130] >         prometheus :9153
	I0108 23:11:21.945640  411209 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0108 23:11:21.945652  411209 command_runner.go:130] >            max_concurrent 1000
	I0108 23:11:21.945661  411209 command_runner.go:130] >         }
	I0108 23:11:21.945674  411209 command_runner.go:130] >         cache 30
	I0108 23:11:21.945683  411209 command_runner.go:130] >         loop
	I0108 23:11:21.945690  411209 command_runner.go:130] >         reload
	I0108 23:11:21.945700  411209 command_runner.go:130] >         loadbalance
	I0108 23:11:21.945710  411209 command_runner.go:130] >     }
	I0108 23:11:21.945730  411209 command_runner.go:130] > kind: ConfigMap
	I0108 23:11:21.945740  411209 command_runner.go:130] > metadata:
	I0108 23:11:21.945751  411209 command_runner.go:130] >   creationTimestamp: "2024-01-08T23:11:09Z"
	I0108 23:11:21.945760  411209 command_runner.go:130] >   name: coredns
	I0108 23:11:21.945770  411209 command_runner.go:130] >   namespace: kube-system
	I0108 23:11:21.945775  411209 command_runner.go:130] >   resourceVersion: "263"
	I0108 23:11:21.945786  411209 command_runner.go:130] >   uid: 84a00ed4-4c28-4617-8009-8e1c17a9ea5e
	I0108 23:11:21.949176  411209 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
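The sed pipeline above rewrites the Corefile fetched a few lines earlier in two places before piping it into kubectl replace: it inserts a log directive in front of errors, and a hosts block in front of the forward stanza so that host.minikube.internal resolves to the host gateway 192.168.58.1. After the edit (confirmed by the "configmap/coredns replaced" line below) the Corefile reads:

    .:53 {
        log
        errors
        health {
           lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        prometheus :9153
        hosts {
           192.168.58.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }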
	I0108 23:11:22.066466  411209 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 23:11:22.068012  411209 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 23:11:22.344684  411209 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0108 23:11:22.344779  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:22.344804  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:22.344825  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:22.352448  411209 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0108 23:11:22.352473  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:22.352484  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:22.352493  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:22.352500  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:22.352509  411209 round_trippers.go:580]     Content-Length: 291
	I0108 23:11:22.352517  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:22 GMT
	I0108 23:11:22.352524  411209 round_trippers.go:580]     Audit-Id: 6269294f-7685-4ce9-9571-d473037488b0
	I0108 23:11:22.352532  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:22.352853  411209 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"1043e7d7-0de0-4829-9106-70235d9b6dea","resourceVersion":"354","creationTimestamp":"2024-01-08T23:11:09Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0108 23:11:22.352986  411209 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-659947" context rescaled to 1 replicas
	I0108 23:11:22.353022  411209 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 23:11:22.355995  411209 out.go:177] * Verifying Kubernetes components...
	I0108 23:11:22.357834  411209 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 23:11:22.648909  411209 command_runner.go:130] > configmap/coredns replaced
	I0108 23:11:22.654419  411209 start.go:929] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
	I0108 23:11:22.979182  411209 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0108 23:11:22.986005  411209 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0108 23:11:22.992878  411209 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0108 23:11:23.045770  411209 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0108 23:11:23.054132  411209 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0108 23:11:23.061723  411209 command_runner.go:130] > pod/storage-provisioner created
	I0108 23:11:23.067485  411209 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.00097139s)
	I0108 23:11:23.067545  411209 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0108 23:11:23.067713  411209 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0108 23:11:23.067729  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:23.067739  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:23.067748  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:23.068129  411209 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17830-321683/kubeconfig
	I0108 23:11:23.068471  411209 kapi.go:59] client config for multinode-659947: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17830-321683/.minikube/profiles/multinode-659947/client.crt", KeyFile:"/home/jenkins/minikube-integration/17830-321683/.minikube/profiles/multinode-659947/client.key", CAFile:"/home/jenkins/minikube-integration/17830-321683/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 23:11:23.068799  411209 node_ready.go:35] waiting up to 6m0s for node "multinode-659947" to be "Ready" ...
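node_ready.go now polls the node object, one GET roughly every 500ms (the repeated requests that follow), until the NodeReady condition reports True or the 6m budget runs out; while the kubelet is still coming up, the loop keeps logging "Ready":"False". A compact client-go sketch of such a wait loop (interval and timeout taken from the log; not minikube's exact implementation):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the node until its Ready condition is True,
// mirroring the GET loop in the log below.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %q not Ready within %s", name, timeout)
}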
	I0108 23:11:23.068927  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:23.068946  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:23.068956  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:23.068964  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:23.069848  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:23.069876  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:23.069886  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:23.069896  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:23.069904  411209 round_trippers.go:580]     Content-Length: 1273
	I0108 23:11:23.069914  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:23 GMT
	I0108 23:11:23.069923  411209 round_trippers.go:580]     Audit-Id: b493db02-b0db-42d8-9f27-5b77c2871507
	I0108 23:11:23.069931  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:23.069941  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:23.069973  411209 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"406"},"items":[{"metadata":{"name":"standard","uid":"57d4d8ac-deea-455e-8d07-602f7c83f484","resourceVersion":"393","creationTimestamp":"2024-01-08T23:11:22Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-08T23:11:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0108 23:11:23.070371  411209 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"57d4d8ac-deea-455e-8d07-602f7c83f484","resourceVersion":"393","creationTimestamp":"2024-01-08T23:11:22Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-08T23:11:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0108 23:11:23.070421  411209 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0108 23:11:23.070433  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:23.070443  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:23.070452  411209 round_trippers.go:473]     Content-Type: application/json
	I0108 23:11:23.070462  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:23.070970  411209 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 23:11:23.070990  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:23.070997  411209 round_trippers.go:580]     Audit-Id: 83885b95-fae1-47fe-a9c7-f9ff7f5e4e87
	I0108 23:11:23.071004  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:23.071009  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:23.071014  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:23.071019  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:23.071024  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:23 GMT
	I0108 23:11:23.071165  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"337","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 23:11:23.072710  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:23.072728  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:23.072739  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:23.072751  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:23.072759  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:23.072767  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:23.072772  411209 round_trippers.go:580]     Content-Length: 1220
	I0108 23:11:23.072780  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:23 GMT
	I0108 23:11:23.072787  411209 round_trippers.go:580]     Audit-Id: 74798012-c114-4041-baa3-41839bc504c9
	I0108 23:11:23.072822  411209 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"57d4d8ac-deea-455e-8d07-602f7c83f484","resourceVersion":"393","creationTimestamp":"2024-01-08T23:11:22Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-08T23:11:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
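The PUT above re-applies the standard StorageClass with the storageclass.kubernetes.io/is-default-class: "true" annotation, which is what marks it as the cluster default for the storage-provisioner addon. An equivalent operation through the typed client might look like this (a sketch only; the addon flow in this log goes through kubectl apply on the node instead):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// markDefaultStorageClass annotates a StorageClass as the cluster default,
// the same end state the PUT in the log produces.
func markDefaultStorageClass(ctx context.Context, cs kubernetes.Interface, name string) error {
	sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if sc.Annotations == nil {
		sc.Annotations = map[string]string{}
	}
	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
	_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
	return err
}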
	I0108 23:11:23.074698  411209 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0108 23:11:23.075942  411209 addons.go:508] enable addons completed in 1.233388906s: enabled=[storage-provisioner default-storageclass]
	I0108 23:11:23.569316  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:23.569340  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:23.569352  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:23.569362  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:23.571351  411209 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 23:11:23.571373  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:23.571381  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:23.571387  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:23.571392  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:23 GMT
	I0108 23:11:23.571397  411209 round_trippers.go:580]     Audit-Id: 92b36c50-92a8-4324-879a-c51e4c3bdd91
	I0108 23:11:23.571402  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:23.571407  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:23.571541  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"337","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 23:11:24.069092  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:24.069119  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:24.069128  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:24.069134  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:24.071454  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:24.071480  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:24.071492  411209 round_trippers.go:580]     Audit-Id: 7830310d-a565-4ac7-b7d7-1e385f878e45
	I0108 23:11:24.071499  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:24.071508  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:24.071522  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:24.071532  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:24.071541  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:24 GMT
	I0108 23:11:24.071703  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"337","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 23:11:24.569273  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:24.569306  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:24.569316  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:24.569324  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:24.571661  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:24.571683  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:24.571695  411209 round_trippers.go:580]     Audit-Id: 8647207b-bb91-4f50-9a45-037d5760293e
	I0108 23:11:24.571705  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:24.571714  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:24.571722  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:24.571731  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:24.571743  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:24 GMT
	I0108 23:11:24.571874  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"337","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 23:11:25.069449  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:25.069475  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:25.069497  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:25.069503  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:25.071751  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:25.071775  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:25.071783  411209 round_trippers.go:580]     Audit-Id: a63ff053-1934-493e-b169-dc6cac197691
	I0108 23:11:25.071788  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:25.071794  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:25.071800  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:25.071805  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:25.071812  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:25 GMT
	I0108 23:11:25.071984  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"337","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 23:11:25.072361  411209 node_ready.go:58] node "multinode-659947" has status "Ready":"False"
	I0108 23:11:25.569658  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:25.569683  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:25.569692  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:25.569699  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:25.572086  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:25.572106  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:25.572113  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:25 GMT
	I0108 23:11:25.572119  411209 round_trippers.go:580]     Audit-Id: 0b269a9f-e45a-4806-b4d1-29f07e0110f7
	I0108 23:11:25.572124  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:25.572130  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:25.572137  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:25.572145  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:25.572302  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"337","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 23:11:26.069944  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:26.069970  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:26.069979  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:26.069985  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:26.072322  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:26.072344  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:26.072351  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:26.072357  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:26.072362  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:26.072367  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:26.072372  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:26 GMT
	I0108 23:11:26.072377  411209 round_trippers.go:580]     Audit-Id: 5abd7811-d6c0-4a44-9c00-38a9c39b24b6
	I0108 23:11:26.072490  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"337","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 23:11:26.569063  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:26.569093  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:26.569101  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:26.569108  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:26.571466  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:26.571497  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:26.571512  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:26.571520  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:26.571529  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:26 GMT
	I0108 23:11:26.571537  411209 round_trippers.go:580]     Audit-Id: 46345155-eb9d-49d2-b1f9-c92f2abfaea7
	I0108 23:11:26.571545  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:26.571558  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:26.571769  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"337","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 23:11:27.069385  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:27.069412  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:27.069421  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:27.069427  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:27.071748  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:27.071770  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:27.071780  411209 round_trippers.go:580]     Audit-Id: b60f7aa1-8b50-4d4a-81cf-92b8374fe0a1
	I0108 23:11:27.071791  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:27.071798  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:27.071805  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:27.071813  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:27.071822  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:27 GMT
	I0108 23:11:27.071933  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"337","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 23:11:27.569838  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:27.569865  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:27.569879  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:27.569888  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:27.572104  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:27.572132  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:27.572143  411209 round_trippers.go:580]     Audit-Id: 4e434f81-3f39-4ffc-b926-04f487e128a3
	I0108 23:11:27.572152  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:27.572161  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:27.572170  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:27.572179  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:27.572191  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:27 GMT
	I0108 23:11:27.572356  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"337","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 23:11:27.572789  411209 node_ready.go:58] node "multinode-659947" has status "Ready":"False"
	I0108 23:11:28.069933  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:28.069954  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:28.069962  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:28.069968  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:28.072259  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:28.072285  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:28.072297  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:28.072306  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:28 GMT
	I0108 23:11:28.072314  411209 round_trippers.go:580]     Audit-Id: 5c6f7c0a-d5d1-40e6-ad0b-2ebe16219e80
	I0108 23:11:28.072323  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:28.072335  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:28.072346  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:28.072483  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"337","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 23:11:28.570032  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:28.570058  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:28.570066  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:28.570072  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:28.572446  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:28.572486  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:28.572498  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:28.572508  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:28.572519  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:28.572527  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:28 GMT
	I0108 23:11:28.572534  411209 round_trippers.go:580]     Audit-Id: be590715-0987-4efa-b71d-952325fc2a5c
	I0108 23:11:28.572543  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:28.572667  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"337","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 23:11:29.069146  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:29.069173  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:29.069182  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:29.069188  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:29.071723  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:29.071751  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:29.071762  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:29.071772  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:29.071781  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:29.071791  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:29 GMT
	I0108 23:11:29.071804  411209 round_trippers.go:580]     Audit-Id: 79d35d6d-4c5e-419d-a992-cff9c8b02d9a
	I0108 23:11:29.071817  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:29.072020  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"337","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 23:11:29.569577  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:29.569602  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:29.569611  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:29.569617  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:29.571972  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:29.571993  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:29.572008  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:29.572015  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:29.572033  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:29.572041  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:29 GMT
	I0108 23:11:29.572048  411209 round_trippers.go:580]     Audit-Id: eafdad12-d774-4528-bcd6-da81c12f68e8
	I0108 23:11:29.572057  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:29.572241  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"337","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 23:11:30.069545  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:30.069572  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:30.069582  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:30.069588  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:30.072132  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:30.072152  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:30.072160  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:30.072165  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:30.072171  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:30.072176  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:30.072181  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:30 GMT
	I0108 23:11:30.072186  411209 round_trippers.go:580]     Audit-Id: 3243fcb6-064c-4fc4-92c6-2d9a2ddcbdc9
	I0108 23:11:30.072375  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"337","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 23:11:30.072719  411209 node_ready.go:58] node "multinode-659947" has status "Ready":"False"
	I0108 23:11:30.570045  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:30.570068  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:30.570077  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:30.570083  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:30.572471  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:30.572496  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:30.572507  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:30 GMT
	I0108 23:11:30.572516  411209 round_trippers.go:580]     Audit-Id: 495c58b7-c516-4e4a-b5a0-1f49e2ddd3ea
	I0108 23:11:30.572524  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:30.572530  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:30.572536  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:30.572541  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:30.572760  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"337","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 23:11:31.069260  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:31.069294  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:31.069306  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:31.069314  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:31.071802  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:31.071823  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:31.071830  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:31.071836  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:31 GMT
	I0108 23:11:31.071841  411209 round_trippers.go:580]     Audit-Id: 8897e9e8-8c9f-4e51-aba7-020a82255855
	I0108 23:11:31.071846  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:31.071851  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:31.071856  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:31.072039  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"337","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 23:11:31.569760  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:31.569789  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:31.569798  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:31.569804  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:31.572240  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:31.572263  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:31.572270  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:31.572276  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:31.572282  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:31.572287  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:31 GMT
	I0108 23:11:31.572292  411209 round_trippers.go:580]     Audit-Id: 188d51a5-650a-42e5-a1bf-b7ff419cd12e
	I0108 23:11:31.572297  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:31.572443  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"337","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 23:11:32.069281  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:32.069310  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:32.069322  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:32.069329  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:32.071792  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:32.071825  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:32.071834  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:32.071840  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:32.071846  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:32.071854  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:32.071873  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:32 GMT
	I0108 23:11:32.071882  411209 round_trippers.go:580]     Audit-Id: a446b7dd-fdf2-42e4-af2d-3dd3f8fc78ee
	I0108 23:11:32.072122  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"337","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 23:11:32.569856  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:32.569881  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:32.569893  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:32.569900  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:32.572254  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:32.572274  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:32.572281  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:32 GMT
	I0108 23:11:32.572287  411209 round_trippers.go:580]     Audit-Id: 2228bc86-256d-42a7-9019-1c65c023f4fc
	I0108 23:11:32.572293  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:32.572302  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:32.572310  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:32.572325  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:32.572495  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"337","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 23:11:32.572870  411209 node_ready.go:58] node "multinode-659947" has status "Ready":"False"
	I0108 23:11:33.069101  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:33.069121  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:33.069129  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:33.069136  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:33.071416  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:33.071437  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:33.071444  411209 round_trippers.go:580]     Audit-Id: 3f649bca-7955-4ba7-961d-b3c23be3e5b7
	I0108 23:11:33.071450  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:33.071455  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:33.071460  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:33.071465  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:33.071473  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:33 GMT
	I0108 23:11:33.071664  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"337","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 23:11:33.569276  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:33.569301  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:33.569309  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:33.569315  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:33.571796  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:33.571892  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:33.571911  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:33.571924  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:33.571935  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:33 GMT
	I0108 23:11:33.571944  411209 round_trippers.go:580]     Audit-Id: 090aed22-34d8-4f02-99b7-0aedc27a50bb
	I0108 23:11:33.571956  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:33.571969  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:33.572124  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"337","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 23:11:34.069507  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:34.069534  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:34.069542  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:34.069548  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:34.072007  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:34.072031  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:34.072041  411209 round_trippers.go:580]     Audit-Id: 083c3ebd-80c3-435d-8290-28f52822b745
	I0108 23:11:34.072049  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:34.072059  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:34.072077  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:34.072086  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:34.072097  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:34 GMT
	I0108 23:11:34.072203  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"337","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 23:11:34.569805  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:34.569848  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:34.569862  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:34.569870  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:34.572105  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:34.572129  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:34.572136  411209 round_trippers.go:580]     Audit-Id: 37fcfd15-f1bb-469d-a38b-b62f3a8c1417
	I0108 23:11:34.572142  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:34.572147  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:34.572153  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:34.572158  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:34.572163  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:34 GMT
	I0108 23:11:34.572303  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"337","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 23:11:35.070072  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:35.070096  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:35.070105  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:35.070111  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:35.072440  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:35.072462  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:35.072473  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:35.072481  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:35.072489  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:35.072498  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:35 GMT
	I0108 23:11:35.072507  411209 round_trippers.go:580]     Audit-Id: c9ace32b-964d-49d6-a5c0-03b746273bd3
	I0108 23:11:35.072514  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:35.072651  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"337","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 23:11:35.072956  411209 node_ready.go:58] node "multinode-659947" has status "Ready":"False"
	I0108 23:11:35.569217  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:35.569247  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:35.569261  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:35.569285  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:35.571718  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:35.571738  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:35.571745  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:35.571750  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:35.571756  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:35.571763  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:35.571771  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:35 GMT
	I0108 23:11:35.571778  411209 round_trippers.go:580]     Audit-Id: 01ea35c5-9ff2-44a7-9d35-23fdcdde137a
	I0108 23:11:35.571921  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"337","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 23:11:36.069117  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:36.069142  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:36.069150  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:36.069156  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:36.071615  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:36.071639  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:36.071652  411209 round_trippers.go:580]     Audit-Id: 6145c2f2-d727-4a3b-8bbc-adc3110cfaa3
	I0108 23:11:36.071663  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:36.071672  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:36.071685  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:36.071693  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:36.071703  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:36 GMT
	I0108 23:11:36.071826  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"337","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 23:11:36.569439  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:36.569474  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:36.569485  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:36.569494  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:36.571884  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:36.571913  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:36.571922  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:36.571930  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:36.571939  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:36.571946  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:36 GMT
	I0108 23:11:36.571954  411209 round_trippers.go:580]     Audit-Id: 08ce5568-486c-4954-9f6b-2aa642145e04
	I0108 23:11:36.571962  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:36.572142  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"337","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 23:11:37.069850  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:37.069875  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:37.069884  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:37.069890  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:37.072182  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:37.072205  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:37.072216  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:37 GMT
	I0108 23:11:37.072225  411209 round_trippers.go:580]     Audit-Id: 33ac0113-f6bb-4452-884b-0a612bb95f52
	I0108 23:11:37.072238  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:37.072249  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:37.072264  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:37.072270  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:37.072403  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"337","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 23:11:37.569392  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:37.569424  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:37.569433  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:37.569441  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:37.571749  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:37.571772  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:37.571782  411209 round_trippers.go:580]     Audit-Id: 43883645-8937-4668-9405-da7ad017a498
	I0108 23:11:37.571789  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:37.571795  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:37.571803  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:37.571810  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:37.571819  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:37 GMT
	I0108 23:11:37.571987  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"337","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 23:11:37.572320  411209 node_ready.go:58] node "multinode-659947" has status "Ready":"False"
	I0108 23:11:38.069647  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:38.069673  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:38.069686  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:38.069696  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:38.072010  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:38.072029  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:38.072035  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:38.072041  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:38.072046  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:38.072051  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:38 GMT
	I0108 23:11:38.072056  411209 round_trippers.go:580]     Audit-Id: 6573cd05-6ac2-4696-940d-8530d905986a
	I0108 23:11:38.072061  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:38.072230  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"337","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 23:11:38.569917  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:38.569971  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:38.569983  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:38.569991  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:38.572506  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:38.572530  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:38.572537  411209 round_trippers.go:580]     Audit-Id: 1093c366-3f2f-4be4-b9f2-f41b18c0ab2b
	I0108 23:11:38.572543  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:38.572549  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:38.572554  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:38.572559  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:38.572564  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:38 GMT
	I0108 23:11:38.572774  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"337","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 23:11:39.069492  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:39.069521  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:39.069534  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:39.069543  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:39.071977  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:39.072001  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:39.072012  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:39 GMT
	I0108 23:11:39.072021  411209 round_trippers.go:580]     Audit-Id: 44d4672e-4999-4800-8c1a-9cf4945ece22
	I0108 23:11:39.072030  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:39.072039  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:39.072048  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:39.072060  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:39.072210  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"337","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 23:11:39.569804  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:39.569833  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:39.569842  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:39.569849  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:39.572704  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:39.572725  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:39.572734  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:39.572743  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:39.572772  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:39.572784  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:39 GMT
	I0108 23:11:39.572792  411209 round_trippers.go:580]     Audit-Id: 3decc13c-6d24-4d5f-9951-eeda166e1d32
	I0108 23:11:39.572803  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:39.572934  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"337","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 23:11:39.573246  411209 node_ready.go:58] node "multinode-659947" has status "Ready":"False"
	I0108 23:11:40.069509  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:40.069533  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:40.069544  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:40.069573  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:40.071941  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:40.071972  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:40.071983  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:40.071990  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:40.071998  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:40.072005  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:40 GMT
	I0108 23:11:40.072014  411209 round_trippers.go:580]     Audit-Id: 6e2d4c57-ceab-4c87-8aa2-9835a0af4075
	I0108 23:11:40.072022  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:40.072161  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"337","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 23:11:40.569868  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:40.569899  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:40.569914  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:40.569922  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:40.572322  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:40.572341  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:40.572348  411209 round_trippers.go:580]     Audit-Id: 59e5dae9-6c85-47f3-83b2-2ba075ba1f46
	I0108 23:11:40.572356  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:40.572361  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:40.572366  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:40.572371  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:40.572376  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:40 GMT
	I0108 23:11:40.572645  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"337","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 23:11:41.069178  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:41.069207  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:41.069216  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:41.069223  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:41.071706  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:41.071730  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:41.071737  411209 round_trippers.go:580]     Audit-Id: 9bbc5290-582c-4873-80f8-b7e5376ba768
	I0108 23:11:41.071743  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:41.071748  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:41.071753  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:41.071761  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:41.071769  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:41 GMT
	I0108 23:11:41.071930  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"337","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 23:11:41.569405  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:41.569432  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:41.569441  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:41.569446  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:41.571749  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:41.571775  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:41.571786  411209 round_trippers.go:580]     Audit-Id: 18aa1987-fce0-4200-8a46-22a754102865
	I0108 23:11:41.571794  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:41.571800  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:41.571810  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:41.571819  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:41.571832  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:41 GMT
	I0108 23:11:41.571987  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"337","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 23:11:42.069796  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:42.069820  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:42.069828  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:42.069834  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:42.072212  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:42.072245  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:42.072256  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:42.072265  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:42.072274  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:42.072281  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:42.072291  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:42 GMT
	I0108 23:11:42.072296  411209 round_trippers.go:580]     Audit-Id: 97752260-d265-4749-9a16-10d02a5312ac
	I0108 23:11:42.072419  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"337","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 23:11:42.072754  411209 node_ready.go:58] node "multinode-659947" has status "Ready":"False"
	I0108 23:11:42.569027  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:42.569049  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:42.569058  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:42.569065  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:42.571538  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:42.571565  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:42.571576  411209 round_trippers.go:580]     Audit-Id: 97e6bc87-ab55-44b7-9e5e-36b3356c95e5
	I0108 23:11:42.571583  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:42.571590  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:42.571598  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:42.571608  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:42.571616  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:42 GMT
	I0108 23:11:42.571738  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"337","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 23:11:43.069276  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:43.069301  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:43.069316  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:43.069322  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:43.071707  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:43.071730  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:43.071741  411209 round_trippers.go:580]     Audit-Id: 3c3b9265-fc48-4199-aef5-187f1be8edf3
	I0108 23:11:43.071749  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:43.071757  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:43.071764  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:43.071772  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:43.071783  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:43 GMT
	I0108 23:11:43.071919  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"337","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 23:11:43.569480  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:43.569506  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:43.569515  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:43.569521  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:43.571942  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:43.571961  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:43.571968  411209 round_trippers.go:580]     Audit-Id: e00f2277-e88c-4fc6-b622-e98a56f7e762
	I0108 23:11:43.571973  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:43.571979  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:43.571984  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:43.571989  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:43.571994  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:43 GMT
	I0108 23:11:43.572191  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"337","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 23:11:44.069917  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:44.069998  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:44.070007  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:44.070013  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:44.072287  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:44.072309  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:44.072317  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:44 GMT
	I0108 23:11:44.072322  411209 round_trippers.go:580]     Audit-Id: 1412feff-1892-40a9-b717-99a09fa9b9ce
	I0108 23:11:44.072328  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:44.072333  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:44.072340  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:44.072354  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:44.072493  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"337","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 23:11:44.072801  411209 node_ready.go:58] node "multinode-659947" has status "Ready":"False"
	I0108 23:11:44.569710  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:44.569731  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:44.569740  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:44.569746  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:44.572065  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:44.572086  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:44.572093  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:44.572099  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:44 GMT
	I0108 23:11:44.572105  411209 round_trippers.go:580]     Audit-Id: 00dcde20-474c-42a6-9f47-9ab353dffc29
	I0108 23:11:44.572110  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:44.572117  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:44.572125  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:44.572303  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"337","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 23:11:45.069988  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:45.070016  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:45.070025  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:45.070031  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:45.072297  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:45.072317  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:45.072324  411209 round_trippers.go:580]     Audit-Id: 0b121ee2-abaf-448b-ae89-17d8cc05e75d
	I0108 23:11:45.072329  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:45.072335  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:45.072340  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:45.072345  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:45.072350  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:45 GMT
	I0108 23:11:45.072516  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"337","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 23:11:45.569138  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:45.569164  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:45.569173  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:45.569179  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:45.571587  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:45.571608  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:45.571616  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:45.571622  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:45.571627  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:45.571633  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:45 GMT
	I0108 23:11:45.571642  411209 round_trippers.go:580]     Audit-Id: 0612b2fa-3950-435f-a016-ddae5f89bde6
	I0108 23:11:45.571650  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:45.571857  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"337","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 23:11:46.069398  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:46.069424  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:46.069432  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:46.069439  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:46.071973  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:46.072002  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:46.072012  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:46.072021  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:46.072029  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:46.072040  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:46.072049  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:46 GMT
	I0108 23:11:46.072119  411209 round_trippers.go:580]     Audit-Id: ff31da13-ecd0-4b46-af8c-a7099a32d346
	I0108 23:11:46.072313  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"337","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 23:11:46.569877  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:46.569904  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:46.569925  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:46.569931  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:46.572326  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:46.572350  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:46.572358  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:46.572367  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:46.572376  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:46.572383  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:46 GMT
	I0108 23:11:46.572390  411209 round_trippers.go:580]     Audit-Id: fd5e210f-fd9e-424e-bf7b-e343927c2fc2
	I0108 23:11:46.572398  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:46.572513  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"337","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 23:11:46.572825  411209 node_ready.go:58] node "multinode-659947" has status "Ready":"False"
	I0108 23:11:47.069126  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:47.069150  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:47.069158  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:47.069165  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:47.071771  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:47.071794  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:47.071802  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:47.071810  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:47 GMT
	I0108 23:11:47.071821  411209 round_trippers.go:580]     Audit-Id: 7a2d50b2-8520-4850-aaf8-81d60a00fa7c
	I0108 23:11:47.071831  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:47.071841  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:47.071851  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:47.072018  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"337","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 23:11:47.569060  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:47.569084  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:47.569093  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:47.569100  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:47.571622  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:47.571648  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:47.571664  411209 round_trippers.go:580]     Audit-Id: c2800519-36b8-47b3-8f83-8d918c981f1a
	I0108 23:11:47.571672  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:47.571680  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:47.571687  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:47.571699  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:47.571714  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:47 GMT
	I0108 23:11:47.571847  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"337","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 23:11:48.069406  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:48.069435  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:48.069443  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:48.069450  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:48.071987  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:48.072013  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:48.072024  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:48 GMT
	I0108 23:11:48.072033  411209 round_trippers.go:580]     Audit-Id: 29c1e001-ef66-420c-a674-cbd24cb415c6
	I0108 23:11:48.072040  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:48.072048  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:48.072056  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:48.072066  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:48.072203  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"337","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 23:11:48.569845  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:48.569871  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:48.569880  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:48.569886  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:48.572209  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:48.572232  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:48.572239  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:48.572246  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:48.572254  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:48.572262  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:48 GMT
	I0108 23:11:48.572271  411209 round_trippers.go:580]     Audit-Id: 34ff253c-f139-4137-9077-bc76a9fad09c
	I0108 23:11:48.572280  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:48.572437  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"337","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 23:11:49.069087  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:49.069115  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:49.069123  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:49.069130  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:49.071524  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:49.071550  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:49.071560  411209 round_trippers.go:580]     Audit-Id: 0bbb975b-d85b-46a5-b0a9-03fab8e198d1
	I0108 23:11:49.071570  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:49.071578  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:49.071587  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:49.071594  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:49.071602  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:49 GMT
	I0108 23:11:49.071763  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"337","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 23:11:49.072206  411209 node_ready.go:58] node "multinode-659947" has status "Ready":"False"
	I0108 23:11:49.569176  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:49.569198  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:49.569208  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:49.569214  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:49.571990  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:49.572015  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:49.572027  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:49.572036  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:49.572043  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:49 GMT
	I0108 23:11:49.572050  411209 round_trippers.go:580]     Audit-Id: 576196c6-c562-455c-b651-f407e3fee9c1
	I0108 23:11:49.572059  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:49.572067  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:49.572189  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"337","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 23:11:50.069845  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:50.069872  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:50.069881  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:50.069888  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:50.072339  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:50.072371  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:50.072378  411209 round_trippers.go:580]     Audit-Id: dc00c1f3-2c05-4f05-b705-66d58ac6d047
	I0108 23:11:50.072384  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:50.072390  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:50.072395  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:50.072401  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:50.072407  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:50 GMT
	I0108 23:11:50.072522  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"337","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 23:11:50.569021  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:50.569057  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:50.569069  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:50.569079  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:50.571813  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:50.571844  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:50.571855  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:50.571864  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:50.571873  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:50 GMT
	I0108 23:11:50.571882  411209 round_trippers.go:580]     Audit-Id: da5e7bb6-86da-421a-b8ba-bf3a51afaf85
	I0108 23:11:50.571891  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:50.571957  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:50.572132  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"337","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 23:11:51.069699  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:51.069725  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:51.069734  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:51.069740  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:51.072119  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:51.072141  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:51.072149  411209 round_trippers.go:580]     Audit-Id: 3096fe46-21de-4728-b441-e6bcad0858a8
	I0108 23:11:51.072154  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:51.072159  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:51.072164  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:51.072169  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:51.072175  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:51 GMT
	I0108 23:11:51.072372  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"337","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 23:11:51.072732  411209 node_ready.go:58] node "multinode-659947" has status "Ready":"False"
	I0108 23:11:51.570061  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:51.570090  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:51.570103  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:51.570113  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:51.572487  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:51.572529  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:51.572545  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:51.572551  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:51 GMT
	I0108 23:11:51.572557  411209 round_trippers.go:580]     Audit-Id: 9c56bdb7-67e1-480a-b9cb-d1578403e2f6
	I0108 23:11:51.572565  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:51.572570  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:51.572576  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:51.572695  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"337","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 23:11:52.069674  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:52.069698  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:52.069707  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:52.069713  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:52.072020  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:52.072056  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:52.072067  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:52.072074  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:52 GMT
	I0108 23:11:52.072080  411209 round_trippers.go:580]     Audit-Id: fdb7a3e5-7724-4da5-b670-c428d843512c
	I0108 23:11:52.072085  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:52.072090  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:52.072095  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:52.072275  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"337","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 23:11:52.569997  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:52.570031  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:52.570040  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:52.570046  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:52.572512  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:52.572548  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:52.572559  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:52 GMT
	I0108 23:11:52.572568  411209 round_trippers.go:580]     Audit-Id: 2fff4313-50f3-449d-babc-664599bd52d7
	I0108 23:11:52.572577  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:52.572587  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:52.572598  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:52.572607  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:52.572722  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"337","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 23:11:53.069261  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:53.069289  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:53.069298  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:53.069304  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:53.071768  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:53.071791  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:53.071798  411209 round_trippers.go:580]     Audit-Id: 3c05d493-24d1-40dd-a626-357a0e56c8bc
	I0108 23:11:53.071805  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:53.071810  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:53.071815  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:53.071820  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:53.071826  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:53 GMT
	I0108 23:11:53.071971  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"337","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 23:11:53.569472  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:53.569503  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:53.569513  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:53.569522  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:53.574012  411209 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 23:11:53.574047  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:53.574057  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:53.574065  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:53.574072  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:53.574080  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:53 GMT
	I0108 23:11:53.574089  411209 round_trippers.go:580]     Audit-Id: 3cd45527-f0ec-42f0-b9fc-77c4038fdab2
	I0108 23:11:53.574097  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:53.574235  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"337","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 23:11:53.574592  411209 node_ready.go:58] node "multinode-659947" has status "Ready":"False"
	I0108 23:11:54.069866  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:54.069896  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:54.069908  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:54.069917  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:54.072389  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:54.072418  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:54.072430  411209 round_trippers.go:580]     Audit-Id: 975bcca3-fe60-415c-9a02-bcbf4cefb6db
	I0108 23:11:54.072438  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:54.072446  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:54.072455  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:54.072463  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:54.072472  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:54 GMT
	I0108 23:11:54.072738  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"422","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0108 23:11:54.073090  411209 node_ready.go:49] node "multinode-659947" has status "Ready":"True"
	I0108 23:11:54.073113  411209 node_ready.go:38] duration metric: took 31.0042733s waiting for node "multinode-659947" to be "Ready" ...
	I0108 23:11:54.073126  411209 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
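
The node_ready wait recorded above is a plain condition poll: one GET of the Node object roughly every 500ms (the request timestamps step from .069 to .569 within each second) until the NodeReady condition reports "True" or the timeout expires, after which the log shows the same treatment applied to each system-critical pod's Ready condition. The following is a minimal client-go sketch of that polling pattern, not minikube's actual implementation; the helper name waitNodeReady and the package name are hypothetical.

// A minimal sketch, assuming client-go, of the readiness poll recorded in
// the surrounding log; waitNodeReady is a hypothetical helper name.
package nodewait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	// The log shows one GET roughly every 500ms; poll on the same cadence.
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat transient API errors as "not ready yet"
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil // Ready condition not posted on the node yet
	})
}

Under this reading, the repeated 200 OK responses with an unchanged resourceVersion (337) are simply polls that found the Ready condition still "False"; the poll returns once resourceVersion 422 arrives with Ready "True", matching the ~31s duration metric logged above.
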
	I0108 23:11:54.073199  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0108 23:11:54.073211  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:54.073222  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:54.073231  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:54.076566  411209 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 23:11:54.076604  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:54.076615  411209 round_trippers.go:580]     Audit-Id: 615b5907-1d8f-4fd7-b6c4-643f65d6f11d
	I0108 23:11:54.076622  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:54.076629  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:54.076638  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:54.076646  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:54.076659  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:54 GMT
	I0108 23:11:54.077021  411209 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"428"},"items":[{"metadata":{"name":"coredns-5dd5756b68-7vbqm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5ab36954-d4e3-4e0c-8635-399567429001","resourceVersion":"427","creationTimestamp":"2024-01-08T23:11:22Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"10c73502-72ee-4bbf-af0e-2b9d1dc4670b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:11:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"10c73502-72ee-4bbf-af0e-2b9d1dc4670b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55534 chars]
	I0108 23:11:54.079999  411209 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-7vbqm" in "kube-system" namespace to be "Ready" ...
	I0108 23:11:54.080074  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-7vbqm
	I0108 23:11:54.080081  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:54.080088  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:54.080094  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:54.082159  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:54.082179  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:54.082188  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:54.082196  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:54.082203  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:54.082212  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:54 GMT
	I0108 23:11:54.082219  411209 round_trippers.go:580]     Audit-Id: f08263b1-342b-4157-8751-4c0a8bbd7a89
	I0108 23:11:54.082227  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:54.082339  411209 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-7vbqm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5ab36954-d4e3-4e0c-8635-399567429001","resourceVersion":"427","creationTimestamp":"2024-01-08T23:11:22Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"10c73502-72ee-4bbf-af0e-2b9d1dc4670b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:11:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"10c73502-72ee-4bbf-af0e-2b9d1dc4670b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0108 23:11:54.082744  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:54.082756  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:54.082763  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:54.082769  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:54.084557  411209 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 23:11:54.084581  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:54.084591  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:54.084600  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:54.084609  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:54.084617  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:54.084622  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:54 GMT
	I0108 23:11:54.084628  411209 round_trippers.go:580]     Audit-Id: 2d4b5980-57b5-4286-8b5b-d9bc2e2da9da
	I0108 23:11:54.084742  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"422","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0108 23:11:54.580663  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-7vbqm
	I0108 23:11:54.580689  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:54.580699  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:54.580705  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:54.583236  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:54.583288  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:54.583300  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:54 GMT
	I0108 23:11:54.583310  411209 round_trippers.go:580]     Audit-Id: 5c7c780f-b872-4fb2-bcd9-8e4e6f6c73ea
	I0108 23:11:54.583320  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:54.583329  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:54.583338  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:54.583344  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:54.583522  411209 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-7vbqm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5ab36954-d4e3-4e0c-8635-399567429001","resourceVersion":"427","creationTimestamp":"2024-01-08T23:11:22Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"10c73502-72ee-4bbf-af0e-2b9d1dc4670b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:11:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"10c73502-72ee-4bbf-af0e-2b9d1dc4670b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0108 23:11:54.584016  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:54.584031  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:54.584038  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:54.584044  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:54.586094  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:54.586115  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:54.586122  411209 round_trippers.go:580]     Audit-Id: c21e4144-113f-4778-9e05-ad9dbf858909
	I0108 23:11:54.586131  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:54.586139  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:54.586149  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:54.586158  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:54.586170  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:54 GMT
	I0108 23:11:54.586283  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"422","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0108 23:11:55.081168  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-7vbqm
	I0108 23:11:55.081206  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:55.081217  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:55.081226  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:55.083824  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:55.083846  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:55.083854  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:55.083860  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:55 GMT
	I0108 23:11:55.083865  411209 round_trippers.go:580]     Audit-Id: b8521c5f-891f-4fc8-bb5f-de9ad8b94cf1
	I0108 23:11:55.083870  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:55.083875  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:55.083880  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:55.083977  411209 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-7vbqm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5ab36954-d4e3-4e0c-8635-399567429001","resourceVersion":"440","creationTimestamp":"2024-01-08T23:11:22Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"10c73502-72ee-4bbf-af0e-2b9d1dc4670b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:11:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"10c73502-72ee-4bbf-af0e-2b9d1dc4670b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0108 23:11:55.084462  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:55.084478  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:55.084486  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:55.084493  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:55.086550  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:55.086572  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:55.086580  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:55.086588  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:55.086596  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:55 GMT
	I0108 23:11:55.086604  411209 round_trippers.go:580]     Audit-Id: 33549158-1dd3-47c2-80a6-878d549b1a8d
	I0108 23:11:55.086617  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:55.086626  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:55.086735  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"422","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0108 23:11:55.087112  411209 pod_ready.go:92] pod "coredns-5dd5756b68-7vbqm" in "kube-system" namespace has status "Ready":"True"
	I0108 23:11:55.087135  411209 pod_ready.go:81] duration metric: took 1.007111778s waiting for pod "coredns-5dd5756b68-7vbqm" in "kube-system" namespace to be "Ready" ...
	I0108 23:11:55.087147  411209 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-659947" in "kube-system" namespace to be "Ready" ...
	I0108 23:11:55.087213  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-659947
	I0108 23:11:55.087223  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:55.087234  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:55.087243  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:55.088996  411209 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 23:11:55.089012  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:55.089020  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:55 GMT
	I0108 23:11:55.089026  411209 round_trippers.go:580]     Audit-Id: efb397c9-67a6-43d5-9da4-56a49b0d9b68
	I0108 23:11:55.089031  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:55.089036  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:55.089041  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:55.089046  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:55.089205  411209 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-659947","namespace":"kube-system","uid":"4a1f5448-9a96-4c2d-b974-fc8604a23e20","resourceVersion":"307","creationTimestamp":"2024-01-08T23:11:09Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"4836d88bf0fddab354d811e82e0bcaaf","kubernetes.io/config.mirror":"4836d88bf0fddab354d811e82e0bcaaf","kubernetes.io/config.seen":"2024-01-08T23:11:09.365606083Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:11:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0108 23:11:55.089615  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:55.089634  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:55.089645  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:55.089654  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:55.091509  411209 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 23:11:55.091533  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:55.091544  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:55 GMT
	I0108 23:11:55.091553  411209 round_trippers.go:580]     Audit-Id: 2323dc36-4c74-47b4-8f41-fe1ad6486655
	I0108 23:11:55.091561  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:55.091569  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:55.091587  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:55.091608  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:55.091763  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"422","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0108 23:11:55.092151  411209 pod_ready.go:92] pod "etcd-multinode-659947" in "kube-system" namespace has status "Ready":"True"
	I0108 23:11:55.092173  411209 pod_ready.go:81] duration metric: took 5.017381ms waiting for pod "etcd-multinode-659947" in "kube-system" namespace to be "Ready" ...
	I0108 23:11:55.092190  411209 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-659947" in "kube-system" namespace to be "Ready" ...
	I0108 23:11:55.092268  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-659947
	I0108 23:11:55.092280  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:55.092290  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:55.092300  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:55.094193  411209 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 23:11:55.094210  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:55.094217  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:55.094223  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:55.094228  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:55.094236  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:55 GMT
	I0108 23:11:55.094245  411209 round_trippers.go:580]     Audit-Id: b33dd00a-fb1c-40ba-99fc-20f8868e67f7
	I0108 23:11:55.094256  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:55.094407  411209 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-659947","namespace":"kube-system","uid":"4091bb80-9af3-4a3a-864e-0a13751c0708","resourceVersion":"303","creationTimestamp":"2024-01-08T23:11:09Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"4c65b54013af928699c1cd97dd72acc7","kubernetes.io/config.mirror":"4c65b54013af928699c1cd97dd72acc7","kubernetes.io/config.seen":"2024-01-08T23:11:09.365607797Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:11:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0108 23:11:55.094878  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:55.094894  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:55.094901  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:55.094906  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:55.096659  411209 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 23:11:55.096675  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:55.096682  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:55.096688  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:55.096695  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:55.096703  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:55.096711  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:55 GMT
	I0108 23:11:55.096719  411209 round_trippers.go:580]     Audit-Id: 410d6a91-34be-40d2-b842-cc73aed2979c
	I0108 23:11:55.096894  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"422","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0108 23:11:55.097198  411209 pod_ready.go:92] pod "kube-apiserver-multinode-659947" in "kube-system" namespace has status "Ready":"True"
	I0108 23:11:55.097215  411209 pod_ready.go:81] duration metric: took 5.013584ms waiting for pod "kube-apiserver-multinode-659947" in "kube-system" namespace to be "Ready" ...
	I0108 23:11:55.097225  411209 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-659947" in "kube-system" namespace to be "Ready" ...
	I0108 23:11:55.097278  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-659947
	I0108 23:11:55.097286  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:55.097292  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:55.097299  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:55.099295  411209 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 23:11:55.099322  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:55.099329  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:55.099334  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:55.099341  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:55.099350  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:55.099357  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:55 GMT
	I0108 23:11:55.099371  411209 round_trippers.go:580]     Audit-Id: 80e98a06-a7c3-4c1f-afa2-723395c81a68
	I0108 23:11:55.099514  411209 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-659947","namespace":"kube-system","uid":"99044a00-503b-4f39-aec8-d541a5d88b61","resourceVersion":"340","creationTimestamp":"2024-01-08T23:11:09Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"35d04abfdea411f0288eb18c4ccfb806","kubernetes.io/config.mirror":"35d04abfdea411f0288eb18c4ccfb806","kubernetes.io/config.seen":"2024-01-08T23:11:09.365600379Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:11:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0108 23:11:55.099958  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:55.099972  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:55.099979  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:55.099987  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:55.101859  411209 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 23:11:55.101878  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:55.101887  411209 round_trippers.go:580]     Audit-Id: 00268984-72a2-4e9c-b066-919975a5d23a
	I0108 23:11:55.101896  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:55.101903  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:55.101909  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:55.101917  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:55.101925  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:55 GMT
	I0108 23:11:55.102031  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"422","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0108 23:11:55.102326  411209 pod_ready.go:92] pod "kube-controller-manager-multinode-659947" in "kube-system" namespace has status "Ready":"True"
	I0108 23:11:55.102345  411209 pod_ready.go:81] duration metric: took 5.112807ms waiting for pod "kube-controller-manager-multinode-659947" in "kube-system" namespace to be "Ready" ...
	I0108 23:11:55.102360  411209 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rf4sd" in "kube-system" namespace to be "Ready" ...
	I0108 23:11:55.102416  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rf4sd
	I0108 23:11:55.102427  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:55.102437  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:55.102446  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:55.104346  411209 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 23:11:55.104364  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:55.104371  411209 round_trippers.go:580]     Audit-Id: 1449ad3a-0294-4e53-a8c7-f44a0e6d116a
	I0108 23:11:55.104376  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:55.104381  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:55.104386  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:55.104392  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:55.104398  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:55 GMT
	I0108 23:11:55.104578  411209 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rf4sd","generateName":"kube-proxy-","namespace":"kube-system","uid":"c616c195-de73-4c48-8660-a6d67916d665","resourceVersion":"409","creationTimestamp":"2024-01-08T23:11:22Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d8964d22-9761-414a-9f1a-850b5da0c86f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:11:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d8964d22-9761-414a-9f1a-850b5da0c86f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I0108 23:11:55.270360  411209 request.go:629] Waited for 165.349806ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:55.270436  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:55.270441  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:55.270449  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:55.270458  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:55.272796  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:55.272816  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:55.272826  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:55.272835  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:55.272842  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:55 GMT
	I0108 23:11:55.272849  411209 round_trippers.go:580]     Audit-Id: a67545dd-00bc-4566-b16c-e17a680f97d7
	I0108 23:11:55.272857  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:55.272864  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:55.272991  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"422","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0108 23:11:55.273314  411209 pod_ready.go:92] pod "kube-proxy-rf4sd" in "kube-system" namespace has status "Ready":"True"
	I0108 23:11:55.273334  411209 pod_ready.go:81] duration metric: took 170.966489ms waiting for pod "kube-proxy-rf4sd" in "kube-system" namespace to be "Ready" ...
	I0108 23:11:55.273343  411209 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-659947" in "kube-system" namespace to be "Ready" ...
	I0108 23:11:55.470803  411209 request.go:629] Waited for 197.38414ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-659947
	I0108 23:11:55.470891  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-659947
	I0108 23:11:55.470898  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:55.470907  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:55.470913  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:55.473366  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:55.473392  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:55.473401  411209 round_trippers.go:580]     Audit-Id: 4237c31d-1e4a-438c-94f0-5fcc5342dea3
	I0108 23:11:55.473409  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:55.473416  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:55.473424  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:55.473433  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:55.473441  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:55 GMT
	I0108 23:11:55.473573  411209 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-659947","namespace":"kube-system","uid":"bca5adfe-3eb1-4ad1-a236-d9ce4c6db898","resourceVersion":"304","creationTimestamp":"2024-01-08T23:11:09Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6ccc1302a59a76635d1eec9e1e275773","kubernetes.io/config.mirror":"6ccc1302a59a76635d1eec9e1e275773","kubernetes.io/config.seen":"2024-01-08T23:11:09.365604859Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:11:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0108 23:11:55.669964  411209 request.go:629] Waited for 195.963896ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:55.670028  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:11:55.670033  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:55.670040  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:55.670046  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:55.672514  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:55.672538  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:55.672546  411209 round_trippers.go:580]     Audit-Id: 8d683648-260c-421c-84a5-a92f8b163d0f
	I0108 23:11:55.672552  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:55.672557  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:55.672563  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:55.672568  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:55.672578  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:55 GMT
	I0108 23:11:55.672694  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"422","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0108 23:11:55.673009  411209 pod_ready.go:92] pod "kube-scheduler-multinode-659947" in "kube-system" namespace has status "Ready":"True"
	I0108 23:11:55.673027  411209 pod_ready.go:81] duration metric: took 399.678873ms waiting for pod "kube-scheduler-multinode-659947" in "kube-system" namespace to be "Ready" ...
	I0108 23:11:55.673038  411209 pod_ready.go:38] duration metric: took 1.599896647s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 23:11:55.673055  411209 api_server.go:52] waiting for apiserver process to appear ...
	I0108 23:11:55.673106  411209 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 23:11:55.684218  411209 command_runner.go:130] > 1450
	I0108 23:11:55.684280  411209 api_server.go:72] duration metric: took 33.3312253s to wait for apiserver process to appear ...
	I0108 23:11:55.684295  411209 api_server.go:88] waiting for apiserver healthz status ...
	I0108 23:11:55.684322  411209 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0108 23:11:55.690907  411209 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0108 23:11:55.691007  411209 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I0108 23:11:55.691019  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:55.691028  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:55.691034  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:55.692008  411209 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0108 23:11:55.692031  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:55.692042  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:55.692051  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:55.692060  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:55.692069  411209 round_trippers.go:580]     Content-Length: 264
	I0108 23:11:55.692091  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:55 GMT
	I0108 23:11:55.692099  411209 round_trippers.go:580]     Audit-Id: 98d9ac99-9f05-4612-a16b-7c455c8f7ca3
	I0108 23:11:55.692108  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:55.692139  411209 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0108 23:11:55.692244  411209 api_server.go:141] control plane version: v1.28.4
	I0108 23:11:55.692268  411209 api_server.go:131] duration metric: took 7.9641ms to wait for apiserver health ...
	I0108 23:11:55.692278  411209 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 23:11:55.870719  411209 request.go:629] Waited for 178.361171ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0108 23:11:55.870803  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0108 23:11:55.870809  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:55.870817  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:55.870829  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:55.874344  411209 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 23:11:55.874377  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:55.874387  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:55.874396  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:55 GMT
	I0108 23:11:55.874406  411209 round_trippers.go:580]     Audit-Id: 211b8fa8-074b-411c-a416-7dbecf1235fc
	I0108 23:11:55.874414  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:55.874423  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:55.874432  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:55.874918  411209 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"445"},"items":[{"metadata":{"name":"coredns-5dd5756b68-7vbqm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5ab36954-d4e3-4e0c-8635-399567429001","resourceVersion":"440","creationTimestamp":"2024-01-08T23:11:22Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"10c73502-72ee-4bbf-af0e-2b9d1dc4670b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:11:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"10c73502-72ee-4bbf-af0e-2b9d1dc4670b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I0108 23:11:55.876688  411209 system_pods.go:59] 8 kube-system pods found
	I0108 23:11:55.876716  411209 system_pods.go:61] "coredns-5dd5756b68-7vbqm" [5ab36954-d4e3-4e0c-8635-399567429001] Running
	I0108 23:11:55.876721  411209 system_pods.go:61] "etcd-multinode-659947" [4a1f5448-9a96-4c2d-b974-fc8604a23e20] Running
	I0108 23:11:55.876725  411209 system_pods.go:61] "kindnet-n2q2v" [1abbdfe4-e966-4c67-bcb8-431c9f4402e3] Running
	I0108 23:11:55.876729  411209 system_pods.go:61] "kube-apiserver-multinode-659947" [4091bb80-9af3-4a3a-864e-0a13751c0708] Running
	I0108 23:11:55.876734  411209 system_pods.go:61] "kube-controller-manager-multinode-659947" [99044a00-503b-4f39-aec8-d541a5d88b61] Running
	I0108 23:11:55.876737  411209 system_pods.go:61] "kube-proxy-rf4sd" [c616c195-de73-4c48-8660-a6d67916d665] Running
	I0108 23:11:55.876741  411209 system_pods.go:61] "kube-scheduler-multinode-659947" [bca5adfe-3eb1-4ad1-a236-d9ce4c6db898] Running
	I0108 23:11:55.876745  411209 system_pods.go:61] "storage-provisioner" [812cadd0-ea9b-4733-80f2-235d4f66e583] Running
	I0108 23:11:55.876750  411209 system_pods.go:74] duration metric: took 184.464868ms to wait for pod list to return data ...
	I0108 23:11:55.876758  411209 default_sa.go:34] waiting for default service account to be created ...
	I0108 23:11:56.070195  411209 request.go:629] Waited for 193.351918ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0108 23:11:56.070279  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0108 23:11:56.070289  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:56.070297  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:56.070304  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:56.072767  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:56.072788  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:56.072796  411209 round_trippers.go:580]     Audit-Id: 8413101f-8c28-4419-bf58-07747404c693
	I0108 23:11:56.072801  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:56.072807  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:56.072824  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:56.072829  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:56.072834  411209 round_trippers.go:580]     Content-Length: 261
	I0108 23:11:56.072839  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:56 GMT
	I0108 23:11:56.072864  411209 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"445"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"4a26e1b3-19c6-44bd-a6c8-bb09bfa14c9f","resourceVersion":"333","creationTimestamp":"2024-01-08T23:11:21Z"}}]}
	I0108 23:11:56.073057  411209 default_sa.go:45] found service account: "default"
	I0108 23:11:56.073075  411209 default_sa.go:55] duration metric: took 196.311403ms for default service account to be created ...
	I0108 23:11:56.073084  411209 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 23:11:56.270542  411209 request.go:629] Waited for 197.3883ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0108 23:11:56.270618  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0108 23:11:56.270623  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:56.270632  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:56.270645  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:56.274040  411209 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 23:11:56.274072  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:56.274084  411209 round_trippers.go:580]     Audit-Id: 1c5e97be-9187-46cd-8286-cf7d708f21c9
	I0108 23:11:56.274092  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:56.274101  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:56.274109  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:56.274118  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:56.274128  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:56 GMT
	I0108 23:11:56.274617  411209 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"445"},"items":[{"metadata":{"name":"coredns-5dd5756b68-7vbqm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5ab36954-d4e3-4e0c-8635-399567429001","resourceVersion":"440","creationTimestamp":"2024-01-08T23:11:22Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"10c73502-72ee-4bbf-af0e-2b9d1dc4670b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:11:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"10c73502-72ee-4bbf-af0e-2b9d1dc4670b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I0108 23:11:56.276350  411209 system_pods.go:86] 8 kube-system pods found
	I0108 23:11:56.276377  411209 system_pods.go:89] "coredns-5dd5756b68-7vbqm" [5ab36954-d4e3-4e0c-8635-399567429001] Running
	I0108 23:11:56.276385  411209 system_pods.go:89] "etcd-multinode-659947" [4a1f5448-9a96-4c2d-b974-fc8604a23e20] Running
	I0108 23:11:56.276392  411209 system_pods.go:89] "kindnet-n2q2v" [1abbdfe4-e966-4c67-bcb8-431c9f4402e3] Running
	I0108 23:11:56.276397  411209 system_pods.go:89] "kube-apiserver-multinode-659947" [4091bb80-9af3-4a3a-864e-0a13751c0708] Running
	I0108 23:11:56.276405  411209 system_pods.go:89] "kube-controller-manager-multinode-659947" [99044a00-503b-4f39-aec8-d541a5d88b61] Running
	I0108 23:11:56.276410  411209 system_pods.go:89] "kube-proxy-rf4sd" [c616c195-de73-4c48-8660-a6d67916d665] Running
	I0108 23:11:56.276417  411209 system_pods.go:89] "kube-scheduler-multinode-659947" [bca5adfe-3eb1-4ad1-a236-d9ce4c6db898] Running
	I0108 23:11:56.276424  411209 system_pods.go:89] "storage-provisioner" [812cadd0-ea9b-4733-80f2-235d4f66e583] Running
	I0108 23:11:56.276437  411209 system_pods.go:126] duration metric: took 203.345815ms to wait for k8s-apps to be running ...
	I0108 23:11:56.276455  411209 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 23:11:56.276520  411209 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 23:11:56.287598  411209 system_svc.go:56] duration metric: took 11.130029ms WaitForService to wait for kubelet.
	I0108 23:11:56.287631  411209 kubeadm.go:581] duration metric: took 33.93457852s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 23:11:56.287662  411209 node_conditions.go:102] verifying NodePressure condition ...
	I0108 23:11:56.470002  411209 request.go:629] Waited for 182.253067ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0108 23:11:56.470085  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0108 23:11:56.470090  411209 round_trippers.go:469] Request Headers:
	I0108 23:11:56.470098  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:11:56.470106  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:11:56.472606  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:11:56.472628  411209 round_trippers.go:577] Response Headers:
	I0108 23:11:56.472635  411209 round_trippers.go:580]     Audit-Id: cf78ac7a-c302-4f12-ace7-fdafbd43f3ff
	I0108 23:11:56.472641  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:11:56.472646  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:11:56.472651  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:11:56.472656  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:11:56.472661  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:11:56 GMT
	I0108 23:11:56.472833  411209 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"446"},"items":[{"metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"422","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6000 chars]
	I0108 23:11:56.473256  411209 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0108 23:11:56.473280  411209 node_conditions.go:123] node cpu capacity is 8
	I0108 23:11:56.473292  411209 node_conditions.go:105] duration metric: took 185.624817ms to run NodePressure ...
	I0108 23:11:56.473306  411209 start.go:228] waiting for startup goroutines ...
	I0108 23:11:56.473317  411209 start.go:233] waiting for cluster config update ...
	I0108 23:11:56.473335  411209 start.go:242] writing updated cluster config ...
	I0108 23:11:56.476115  411209 out.go:177] 
	I0108 23:11:56.477904  411209 config.go:182] Loaded profile config "multinode-659947": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 23:11:56.477982  411209 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/multinode-659947/config.json ...
	I0108 23:11:56.480245  411209 out.go:177] * Starting worker node multinode-659947-m02 in cluster multinode-659947
	I0108 23:11:56.482504  411209 cache.go:121] Beginning downloading kic base image for docker with crio
	I0108 23:11:56.484283  411209 out.go:177] * Pulling base image v0.0.42-1704751654-17830 ...
	I0108 23:11:56.485949  411209 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 23:11:56.485985  411209 cache.go:56] Caching tarball of preloaded images
	I0108 23:11:56.486073  411209 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 in local docker daemon
	I0108 23:11:56.486137  411209 preload.go:174] Found /home/jenkins/minikube-integration/17830-321683/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0108 23:11:56.486149  411209 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0108 23:11:56.486255  411209 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/multinode-659947/config.json ...
	I0108 23:11:56.504579  411209 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 in local docker daemon, skipping pull
	I0108 23:11:56.504617  411209 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 exists in daemon, skipping load
	I0108 23:11:56.504650  411209 cache.go:194] Successfully downloaded all kic artifacts
	I0108 23:11:56.504687  411209 start.go:365] acquiring machines lock for multinode-659947-m02: {Name:mkcc2a747420c9e4577429ca73e377c3fac2b4bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:11:56.504798  411209 start.go:369] acquired machines lock for "multinode-659947-m02" in 90.05µs
	I0108 23:11:56.504823  411209 start.go:93] Provisioning new machine with config: &{Name:multinode-659947 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-659947 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0108 23:11:56.504897  411209 start.go:125] createHost starting for "m02" (driver="docker")
	I0108 23:11:56.507605  411209 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0108 23:11:56.507732  411209 start.go:159] libmachine.API.Create for "multinode-659947" (driver="docker")
	I0108 23:11:56.507756  411209 client.go:168] LocalClient.Create starting
	I0108 23:11:56.507831  411209 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17830-321683/.minikube/certs/ca.pem
	I0108 23:11:56.507870  411209 main.go:141] libmachine: Decoding PEM data...
	I0108 23:11:56.507894  411209 main.go:141] libmachine: Parsing certificate...
	I0108 23:11:56.507959  411209 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17830-321683/.minikube/certs/cert.pem
	I0108 23:11:56.507982  411209 main.go:141] libmachine: Decoding PEM data...
	I0108 23:11:56.507991  411209 main.go:141] libmachine: Parsing certificate...
	I0108 23:11:56.508231  411209 cli_runner.go:164] Run: docker network inspect multinode-659947 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 23:11:56.524901  411209 network_create.go:77] Found existing network {name:multinode-659947 subnet:0xc002cd92f0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I0108 23:11:56.524964  411209 kic.go:121] calculated static IP "192.168.58.3" for the "multinode-659947-m02" container
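
The two steps above show how the worker's address is derived: the existing cluster network is inspected for its subnet, and the next free host address becomes the new node's static IP. A minimal stand-alone sketch of the same lookup (network name taken from the log; comments added):

    # Read the subnet of the cluster network; the primary node already holds .2,
    # so the new worker is pinned to the next host address, 192.168.58.3.
    docker network inspect multinode-659947 \
      --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
    # expected output, per the log: 192.168.58.0/24
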
	I0108 23:11:56.525034  411209 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0108 23:11:56.541446  411209 cli_runner.go:164] Run: docker volume create multinode-659947-m02 --label name.minikube.sigs.k8s.io=multinode-659947-m02 --label created_by.minikube.sigs.k8s.io=true
	I0108 23:11:56.559127  411209 oci.go:103] Successfully created a docker volume multinode-659947-m02
	I0108 23:11:56.559226  411209 cli_runner.go:164] Run: docker run --rm --name multinode-659947-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-659947-m02 --entrypoint /usr/bin/test -v multinode-659947-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 -d /var/lib
	I0108 23:11:57.074882  411209 oci.go:107] Successfully prepared a docker volume multinode-659947-m02
	I0108 23:11:57.074917  411209 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 23:11:57.074940  411209 kic.go:194] Starting extracting preloaded images to volume ...
	I0108 23:11:57.074995  411209 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17830-321683/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-659947-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 -I lz4 -xf /preloaded.tar -C /extractDir
	I0108 23:12:02.243476  411209 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17830-321683/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-659947-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 -I lz4 -xf /preloaded.tar -C /extractDir: (5.168416964s)
	I0108 23:12:02.243510  411209 kic.go:203] duration metric: took 5.168568 seconds to extract preloaded images to volume
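
The sequence above is minikube's volume-preload pattern: a named volume is created for the node, a throwaway sidecar container verifies it is mountable, and a one-shot tar container bind-mounts the lz4 preload tarball read-only and untars it straight into the volume, so the node boots with all Kubernetes images already in CRI-O's storage. A condensed sketch of the same pattern ($PRELOAD_DIR and <tag> are placeholders for the cache path and kicbase digest shown in the log):

    docker volume create multinode-659947-m02
    # extract the preloaded images directly into the volume via a one-shot tar container
    docker run --rm --entrypoint /usr/bin/tar \
      -v "$PRELOAD_DIR/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro" \
      -v multinode-659947-m02:/extractDir \
      gcr.io/k8s-minikube/kicbase-builds:<tag> \
      -I lz4 -xf /preloaded.tar -C /extractDir
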
	W0108 23:12:02.243635  411209 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0108 23:12:02.243718  411209 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0108 23:12:02.296529  411209 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-659947-m02 --name multinode-659947-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-659947-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-659947-m02 --network multinode-659947 --ip 192.168.58.3 --volume multinode-659947-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617
	I0108 23:12:02.595870  411209 cli_runner.go:164] Run: docker container inspect multinode-659947-m02 --format={{.State.Running}}
	I0108 23:12:02.613639  411209 cli_runner.go:164] Run: docker container inspect multinode-659947-m02 --format={{.State.Status}}
	I0108 23:12:02.632428  411209 cli_runner.go:164] Run: docker exec multinode-659947-m02 stat /var/lib/dpkg/alternatives/iptables
	I0108 23:12:02.675065  411209 oci.go:144] the created container "multinode-659947-m02" has a running status.
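
Liveness of the freshly started kic container is confirmed by polling docker inspect rather than trusting the exit status of docker run. The same check written as a loop (the loop itself is illustrative; the log issues repeated inspect calls):

    # wait until the node container reports Running, then sanity-check its userspace
    until [ "$(docker container inspect multinode-659947-m02 --format '{{.State.Running}}')" = "true" ]; do
      sleep 1
    done
    docker exec multinode-659947-m02 stat /var/lib/dpkg/alternatives/iptables
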
	I0108 23:12:02.675100  411209 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17830-321683/.minikube/machines/multinode-659947-m02/id_rsa...
	I0108 23:12:02.785973  411209 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-321683/.minikube/machines/multinode-659947-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0108 23:12:02.786021  411209 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17830-321683/.minikube/machines/multinode-659947-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0108 23:12:02.806373  411209 cli_runner.go:164] Run: docker container inspect multinode-659947-m02 --format={{.State.Status}}
	I0108 23:12:02.824365  411209 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0108 23:12:02.824386  411209 kic_runner.go:114] Args: [docker exec --privileged multinode-659947-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
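
SSH access to the node is bootstrapped from the host: a key pair is written under the profile's machines directory, the public half is pushed into the container as the docker user's authorized_keys, and ownership is fixed with a privileged exec. Roughly equivalent commands (minikube generates the key in Go; ssh-keygen and docker cp are used here only for illustration, and the .ssh directory is assumed to exist in the image):

    ssh-keygen -t rsa -N '' -f ./id_rsa
    docker cp ./id_rsa.pub multinode-659947-m02:/home/docker/.ssh/authorized_keys
    docker exec --privileged multinode-659947-m02 \
      chown docker:docker /home/docker/.ssh/authorized_keys
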
	I0108 23:12:02.897279  411209 cli_runner.go:164] Run: docker container inspect multinode-659947-m02 --format={{.State.Status}}
	I0108 23:12:02.913971  411209 machine.go:88] provisioning docker machine ...
	I0108 23:12:02.914024  411209 ubuntu.go:169] provisioning hostname "multinode-659947-m02"
	I0108 23:12:02.914075  411209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-659947-m02
	I0108 23:12:02.931586  411209 main.go:141] libmachine: Using SSH client type: native
	I0108 23:12:02.931953  411209 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 33154 <nil> <nil>}
	I0108 23:12:02.931971  411209 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-659947-m02 && echo "multinode-659947-m02" | sudo tee /etc/hostname
	I0108 23:12:02.932804  411209 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44972->127.0.0.1:33154: read: connection reset by peer
	I0108 23:12:06.078891  411209 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-659947-m02
	
	I0108 23:12:06.079028  411209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-659947-m02
	I0108 23:12:06.095562  411209 main.go:141] libmachine: Using SSH client type: native
	I0108 23:12:06.095922  411209 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 33154 <nil> <nil>}
	I0108 23:12:06.095942  411209 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-659947-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-659947-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-659947-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 23:12:06.231395  411209 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 23:12:06.231426  411209 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17830-321683/.minikube CaCertPath:/home/jenkins/minikube-integration/17830-321683/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17830-321683/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17830-321683/.minikube}
	I0108 23:12:06.231448  411209 ubuntu.go:177] setting up certificates
	I0108 23:12:06.231459  411209 provision.go:83] configureAuth start
	I0108 23:12:06.231608  411209 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-659947-m02
	I0108 23:12:06.248071  411209 provision.go:138] copyHostCerts
	I0108 23:12:06.248137  411209 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-321683/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17830-321683/.minikube/ca.pem
	I0108 23:12:06.248176  411209 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-321683/.minikube/ca.pem, removing ...
	I0108 23:12:06.248187  411209 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-321683/.minikube/ca.pem
	I0108 23:12:06.248273  411209 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-321683/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17830-321683/.minikube/ca.pem (1082 bytes)
	I0108 23:12:06.248364  411209 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-321683/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17830-321683/.minikube/cert.pem
	I0108 23:12:06.248389  411209 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-321683/.minikube/cert.pem, removing ...
	I0108 23:12:06.248401  411209 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-321683/.minikube/cert.pem
	I0108 23:12:06.248436  411209 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-321683/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17830-321683/.minikube/cert.pem (1123 bytes)
	I0108 23:12:06.248519  411209 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-321683/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17830-321683/.minikube/key.pem
	I0108 23:12:06.248543  411209 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-321683/.minikube/key.pem, removing ...
	I0108 23:12:06.248550  411209 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-321683/.minikube/key.pem
	I0108 23:12:06.248583  411209 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-321683/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17830-321683/.minikube/key.pem (1679 bytes)
	I0108 23:12:06.248641  411209 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17830-321683/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17830-321683/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17830-321683/.minikube/certs/ca-key.pem org=jenkins.multinode-659947-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-659947-m02]
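
The server certificate generated here is signed by the profile CA and carries the node IP plus the standard minikube names listed in the san=[...] field. minikube does this in Go; an illustrative openssl equivalent (bash, same subject and SANs, with ca.pem/ca-key.pem as in the log) might look like:

    openssl req -new -newkey rsa:2048 -nodes \
      -subj "/O=jenkins.multinode-659947-m02" \
      -keyout server-key.pem -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -days 1095 -out server.pem \
      -extfile <(printf 'subjectAltName=IP:192.168.58.3,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:multinode-659947-m02')
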
	I0108 23:12:06.410051  411209 provision.go:172] copyRemoteCerts
	I0108 23:12:06.410112  411209 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 23:12:06.410158  411209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-659947-m02
	I0108 23:12:06.427239  411209 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33154 SSHKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/multinode-659947-m02/id_rsa Username:docker}
	I0108 23:12:06.523894  411209 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-321683/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0108 23:12:06.523953  411209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-321683/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0108 23:12:06.546801  411209 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-321683/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0108 23:12:06.546861  411209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-321683/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0108 23:12:06.568532  411209 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-321683/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0108 23:12:06.568597  411209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-321683/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 23:12:06.590349  411209 provision.go:86] duration metric: configureAuth took 358.854333ms
	I0108 23:12:06.590381  411209 ubuntu.go:193] setting minikube options for container-runtime
	I0108 23:12:06.590586  411209 config.go:182] Loaded profile config "multinode-659947": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 23:12:06.590725  411209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-659947-m02
	I0108 23:12:06.607153  411209 main.go:141] libmachine: Using SSH client type: native
	I0108 23:12:06.607643  411209 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 33154 <nil> <nil>}
	I0108 23:12:06.607666  411209 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 23:12:06.828781  411209 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 23:12:06.828809  411209 machine.go:91] provisioned docker machine in 3.914810065s
	I0108 23:12:06.828821  411209 client.go:171] LocalClient.Create took 10.321058953s
	I0108 23:12:06.828844  411209 start.go:167] duration metric: libmachine.API.Create for "multinode-659947" took 10.321111951s
	I0108 23:12:06.828854  411209 start.go:300] post-start starting for "multinode-659947-m02" (driver="docker")
	I0108 23:12:06.828868  411209 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 23:12:06.828931  411209 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 23:12:06.828978  411209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-659947-m02
	I0108 23:12:06.846106  411209 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33154 SSHKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/multinode-659947-m02/id_rsa Username:docker}
	I0108 23:12:06.944325  411209 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 23:12:06.947525  411209 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I0108 23:12:06.947548  411209 command_runner.go:130] > NAME="Ubuntu"
	I0108 23:12:06.947557  411209 command_runner.go:130] > VERSION_ID="22.04"
	I0108 23:12:06.947566  411209 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I0108 23:12:06.947575  411209 command_runner.go:130] > VERSION_CODENAME=jammy
	I0108 23:12:06.947582  411209 command_runner.go:130] > ID=ubuntu
	I0108 23:12:06.947590  411209 command_runner.go:130] > ID_LIKE=debian
	I0108 23:12:06.947601  411209 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0108 23:12:06.947607  411209 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0108 23:12:06.947614  411209 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0108 23:12:06.947623  411209 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0108 23:12:06.947630  411209 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0108 23:12:06.947695  411209 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 23:12:06.947718  411209 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 23:12:06.947729  411209 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 23:12:06.947738  411209 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0108 23:12:06.947752  411209 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-321683/.minikube/addons for local assets ...
	I0108 23:12:06.947804  411209 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-321683/.minikube/files for local assets ...
	I0108 23:12:06.947879  411209 filesync.go:149] local asset: /home/jenkins/minikube-integration/17830-321683/.minikube/files/etc/ssl/certs/3283842.pem -> 3283842.pem in /etc/ssl/certs
	I0108 23:12:06.947892  411209 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-321683/.minikube/files/etc/ssl/certs/3283842.pem -> /etc/ssl/certs/3283842.pem
	I0108 23:12:06.948020  411209 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 23:12:06.956036  411209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-321683/.minikube/files/etc/ssl/certs/3283842.pem --> /etc/ssl/certs/3283842.pem (1708 bytes)
	I0108 23:12:06.977685  411209 start.go:303] post-start completed in 148.812925ms
	I0108 23:12:06.978092  411209 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-659947-m02
	I0108 23:12:06.994987  411209 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/multinode-659947/config.json ...
	I0108 23:12:06.995309  411209 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 23:12:06.995367  411209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-659947-m02
	I0108 23:12:07.011763  411209 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33154 SSHKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/multinode-659947-m02/id_rsa Username:docker}
	I0108 23:12:07.103834  411209 command_runner.go:130] > 24%!
	(MISSING)I0108 23:12:07.104133  411209 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 23:12:07.108456  411209 command_runner.go:130] > 222G
	I0108 23:12:07.108497  411209 start.go:128] duration metric: createHost completed in 10.603587462s
	I0108 23:12:07.108507  411209 start.go:83] releasing machines lock for "multinode-659947-m02", held for 10.603699185s
	I0108 23:12:07.108569  411209 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-659947-m02
	I0108 23:12:07.127137  411209 out.go:177] * Found network options:
	I0108 23:12:07.128877  411209 out.go:177]   - NO_PROXY=192.168.58.2
	W0108 23:12:07.130212  411209 proxy.go:119] fail to check proxy env: Error ip not in block
	W0108 23:12:07.130245  411209 proxy.go:119] fail to check proxy env: Error ip not in block
	I0108 23:12:07.130315  411209 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 23:12:07.130389  411209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-659947-m02
	I0108 23:12:07.130394  411209 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 23:12:07.130485  411209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-659947-m02
	I0108 23:12:07.147126  411209 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33154 SSHKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/multinode-659947-m02/id_rsa Username:docker}
	I0108 23:12:07.147503  411209 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33154 SSHKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/multinode-659947-m02/id_rsa Username:docker}
	I0108 23:12:07.374439  411209 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 23:12:07.374466  411209 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0108 23:12:07.378913  411209 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0108 23:12:07.378944  411209 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0108 23:12:07.378954  411209 command_runner.go:130] > Device: b0h/176d	Inode: 1044333     Links: 1
	I0108 23:12:07.378962  411209 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 23:12:07.378968  411209 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0108 23:12:07.378973  411209 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0108 23:12:07.378978  411209 command_runner.go:130] > Change: 2024-01-08 22:52:04.683201688 +0000
	I0108 23:12:07.378986  411209 command_runner.go:130] >  Birth: 2024-01-08 22:52:04.683201688 +0000
	I0108 23:12:07.379062  411209 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 23:12:07.396969  411209 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0108 23:12:07.397053  411209 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 23:12:07.424308  411209 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0108 23:12:07.424358  411209 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
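
Before kubeadm runs, CNI configs that would conflict with the chosen network plugin are neutralized by renaming them with a .mk_disabled suffix, exactly as the find/mv runs above do for the loopback and bridge/podman configs. The same idiom stand-alone (quoting tightened relative to the logged command):

    # park bridge/podman CNI configs where CRI-O will not load them
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -exec sh -c 'mv "$1" "$1".mk_disabled' _ {} \;
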
	I0108 23:12:07.424367  411209 start.go:475] detecting cgroup driver to use...
	I0108 23:12:07.424404  411209 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0108 23:12:07.424471  411209 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 23:12:07.438819  411209 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 23:12:07.449222  411209 docker.go:203] disabling cri-docker service (if available) ...
	I0108 23:12:07.449278  411209 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 23:12:07.462364  411209 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 23:12:07.475311  411209 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 23:12:07.548630  411209 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 23:12:07.561865  411209 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0108 23:12:07.624827  411209 docker.go:219] disabling docker service ...
	I0108 23:12:07.624910  411209 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 23:12:07.643174  411209 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 23:12:07.653941  411209 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 23:12:07.733234  411209 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0108 23:12:07.733321  411209 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 23:12:07.812001  411209 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0108 23:12:07.812101  411209 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
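
With CRI-O as the target runtime, the competing runtimes shipped in the kicbase image are stopped, disabled and masked so nothing else can claim the CRI socket; the "Created symlink ... → /dev/null" messages above are systemd confirming the masks. The same steps, condensed from the Run lines:

    sudo systemctl stop -f containerd cri-docker.socket cri-docker.service docker.socket docker.service
    sudo systemctl disable cri-docker.socket docker.socket
    sudo systemctl mask cri-docker.service docker.service
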
	I0108 23:12:07.823276  411209 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 23:12:07.838773  411209 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0108 23:12:07.838827  411209 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0108 23:12:07.838883  411209 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 23:12:07.848643  411209 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 23:12:07.848712  411209 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 23:12:07.858171  411209 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 23:12:07.867687  411209 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 23:12:07.877283  411209 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 23:12:07.886320  411209 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 23:12:07.894196  411209 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0108 23:12:07.894265  411209 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 23:12:07.901914  411209 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 23:12:07.975154  411209 ssh_runner.go:195] Run: sudo systemctl restart crio
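
The drop-in at /etc/crio/crio.conf.d/02-crio.conf is rewritten in place with sed so the runtime matches the cluster: kubeadm's pause image, the cgroupfs driver detected on the host earlier, and conmon running in the pod cgroup. Collected from the Run lines above:

    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    sudo systemctl daemon-reload && sudo systemctl restart crio
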
	I0108 23:12:08.080736  411209 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 23:12:08.080812  411209 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 23:12:08.084483  411209 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0108 23:12:08.084509  411209 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0108 23:12:08.084520  411209 command_runner.go:130] > Device: b9h/185d	Inode: 186         Links: 1
	I0108 23:12:08.084532  411209 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 23:12:08.084543  411209 command_runner.go:130] > Access: 2024-01-08 23:12:08.066234935 +0000
	I0108 23:12:08.084550  411209 command_runner.go:130] > Modify: 2024-01-08 23:12:08.066234935 +0000
	I0108 23:12:08.084555  411209 command_runner.go:130] > Change: 2024-01-08 23:12:08.066234935 +0000
	I0108 23:12:08.084559  411209 command_runner.go:130] >  Birth: -
	I0108 23:12:08.084584  411209 start.go:543] Will wait 60s for crictl version
	I0108 23:12:08.084633  411209 ssh_runner.go:195] Run: which crictl
	I0108 23:12:08.087740  411209 command_runner.go:130] > /usr/bin/crictl
	I0108 23:12:08.087802  411209 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 23:12:08.118956  411209 command_runner.go:130] > Version:  0.1.0
	I0108 23:12:08.118983  411209 command_runner.go:130] > RuntimeName:  cri-o
	I0108 23:12:08.118991  411209 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0108 23:12:08.119000  411209 command_runner.go:130] > RuntimeApiVersion:  v1
	I0108 23:12:08.121250  411209 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0108 23:12:08.121335  411209 ssh_runner.go:195] Run: crio --version
	I0108 23:12:08.156804  411209 command_runner.go:130] > crio version 1.24.6
	I0108 23:12:08.156829  411209 command_runner.go:130] > Version:          1.24.6
	I0108 23:12:08.156835  411209 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0108 23:12:08.156840  411209 command_runner.go:130] > GitTreeState:     clean
	I0108 23:12:08.156846  411209 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0108 23:12:08.156850  411209 command_runner.go:130] > GoVersion:        go1.18.2
	I0108 23:12:08.156854  411209 command_runner.go:130] > Compiler:         gc
	I0108 23:12:08.156858  411209 command_runner.go:130] > Platform:         linux/amd64
	I0108 23:12:08.156863  411209 command_runner.go:130] > Linkmode:         dynamic
	I0108 23:12:08.156870  411209 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0108 23:12:08.156874  411209 command_runner.go:130] > SeccompEnabled:   true
	I0108 23:12:08.156879  411209 command_runner.go:130] > AppArmorEnabled:  false
	I0108 23:12:08.156942  411209 ssh_runner.go:195] Run: crio --version
	I0108 23:12:08.190024  411209 command_runner.go:130] > crio version 1.24.6
	I0108 23:12:08.190052  411209 command_runner.go:130] > Version:          1.24.6
	I0108 23:12:08.190059  411209 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0108 23:12:08.190064  411209 command_runner.go:130] > GitTreeState:     clean
	I0108 23:12:08.190069  411209 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0108 23:12:08.190073  411209 command_runner.go:130] > GoVersion:        go1.18.2
	I0108 23:12:08.190078  411209 command_runner.go:130] > Compiler:         gc
	I0108 23:12:08.190082  411209 command_runner.go:130] > Platform:         linux/amd64
	I0108 23:12:08.190091  411209 command_runner.go:130] > Linkmode:         dynamic
	I0108 23:12:08.190098  411209 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0108 23:12:08.190102  411209 command_runner.go:130] > SeccompEnabled:   true
	I0108 23:12:08.190106  411209 command_runner.go:130] > AppArmorEnabled:  false
	I0108 23:12:08.192045  411209 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I0108 23:12:08.193463  411209 out.go:177]   - env NO_PROXY=192.168.58.2
	I0108 23:12:08.194981  411209 cli_runner.go:164] Run: docker network inspect multinode-659947 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 23:12:08.211666  411209 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0108 23:12:08.215516  411209 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 23:12:08.225577  411209 certs.go:56] Setting up /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/multinode-659947 for IP: 192.168.58.3
	I0108 23:12:08.225611  411209 certs.go:190] acquiring lock for shared ca certs: {Name:mka0fb25b2b3d7c6ea0a3bf3a5eb1e0289391c11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 23:12:08.225765  411209 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17830-321683/.minikube/ca.key
	I0108 23:12:08.225819  411209 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17830-321683/.minikube/proxy-client-ca.key
	I0108 23:12:08.225835  411209 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-321683/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0108 23:12:08.225851  411209 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-321683/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0108 23:12:08.225863  411209 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-321683/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0108 23:12:08.225879  411209 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-321683/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0108 23:12:08.225940  411209 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-321683/.minikube/certs/home/jenkins/minikube-integration/17830-321683/.minikube/certs/328384.pem (1338 bytes)
	W0108 23:12:08.225985  411209 certs.go:433] ignoring /home/jenkins/minikube-integration/17830-321683/.minikube/certs/home/jenkins/minikube-integration/17830-321683/.minikube/certs/328384_empty.pem, impossibly tiny 0 bytes
	I0108 23:12:08.226001  411209 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-321683/.minikube/certs/home/jenkins/minikube-integration/17830-321683/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 23:12:08.226035  411209 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-321683/.minikube/certs/home/jenkins/minikube-integration/17830-321683/.minikube/certs/ca.pem (1082 bytes)
	I0108 23:12:08.226069  411209 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-321683/.minikube/certs/home/jenkins/minikube-integration/17830-321683/.minikube/certs/cert.pem (1123 bytes)
	I0108 23:12:08.226103  411209 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-321683/.minikube/certs/home/jenkins/minikube-integration/17830-321683/.minikube/certs/key.pem (1679 bytes)
	I0108 23:12:08.226265  411209 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-321683/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17830-321683/.minikube/files/etc/ssl/certs/3283842.pem (1708 bytes)
	I0108 23:12:08.226321  411209 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-321683/.minikube/certs/328384.pem -> /usr/share/ca-certificates/328384.pem
	I0108 23:12:08.226343  411209 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-321683/.minikube/files/etc/ssl/certs/3283842.pem -> /usr/share/ca-certificates/3283842.pem
	I0108 23:12:08.226361  411209 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-321683/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0108 23:12:08.226777  411209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-321683/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 23:12:08.248427  411209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-321683/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0108 23:12:08.269563  411209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-321683/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 23:12:08.290636  411209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-321683/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0108 23:12:08.311385  411209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-321683/.minikube/certs/328384.pem --> /usr/share/ca-certificates/328384.pem (1338 bytes)
	I0108 23:12:08.332688  411209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-321683/.minikube/files/etc/ssl/certs/3283842.pem --> /usr/share/ca-certificates/3283842.pem (1708 bytes)
	I0108 23:12:08.354070  411209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-321683/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 23:12:08.376586  411209 ssh_runner.go:195] Run: openssl version
	I0108 23:12:08.381482  411209 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0108 23:12:08.381764  411209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3283842.pem && ln -fs /usr/share/ca-certificates/3283842.pem /etc/ssl/certs/3283842.pem"
	I0108 23:12:08.390404  411209 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3283842.pem
	I0108 23:12:08.393522  411209 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan  8 22:58 /usr/share/ca-certificates/3283842.pem
	I0108 23:12:08.393557  411209 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 22:58 /usr/share/ca-certificates/3283842.pem
	I0108 23:12:08.393611  411209 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3283842.pem
	I0108 23:12:08.399370  411209 command_runner.go:130] > 3ec20f2e
	I0108 23:12:08.399650  411209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3283842.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 23:12:08.407984  411209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 23:12:08.416211  411209 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 23:12:08.419515  411209 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan  8 22:52 /usr/share/ca-certificates/minikubeCA.pem
	I0108 23:12:08.419547  411209 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 22:52 /usr/share/ca-certificates/minikubeCA.pem
	I0108 23:12:08.419585  411209 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 23:12:08.425657  411209 command_runner.go:130] > b5213941
	I0108 23:12:08.425726  411209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 23:12:08.434847  411209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/328384.pem && ln -fs /usr/share/ca-certificates/328384.pem /etc/ssl/certs/328384.pem"
	I0108 23:12:08.443465  411209 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/328384.pem
	I0108 23:12:08.446608  411209 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan  8 22:58 /usr/share/ca-certificates/328384.pem
	I0108 23:12:08.446648  411209 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 22:58 /usr/share/ca-certificates/328384.pem
	I0108 23:12:08.446683  411209 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/328384.pem
	I0108 23:12:08.452747  411209 command_runner.go:130] > 51391683
	I0108 23:12:08.453025  411209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/328384.pem /etc/ssl/certs/51391683.0"
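
Each CA is installed using OpenSSL's hashed-symlink layout: the PEM lands under /usr/share/ca-certificates and a symlink named <subject-hash>.0 is created in /etc/ssl/certs, which is what the interleaved openssl x509 -hash and ln -fs runs above compute by hand. As a reusable two-liner (shown for the minikubeCA file from the log):

    # trust a CA the way the log does: hash-named symlink in /etc/ssl/certs
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
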
	I0108 23:12:08.462228  411209 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 23:12:08.465583  411209 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 23:12:08.465633  411209 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 23:12:08.465720  411209 ssh_runner.go:195] Run: crio config
	I0108 23:12:08.501461  411209 command_runner.go:130] ! time="2024-01-08 23:12:08.501014148Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0108 23:12:08.501499  411209 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0108 23:12:08.506066  411209 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0108 23:12:08.506096  411209 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0108 23:12:08.506107  411209 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0108 23:12:08.506120  411209 command_runner.go:130] > #
	I0108 23:12:08.506138  411209 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0108 23:12:08.506151  411209 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0108 23:12:08.506165  411209 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0108 23:12:08.506177  411209 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0108 23:12:08.506183  411209 command_runner.go:130] > # reload'.
	I0108 23:12:08.506189  411209 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0108 23:12:08.506198  411209 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0108 23:12:08.506207  411209 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0108 23:12:08.506215  411209 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0108 23:12:08.506221  411209 command_runner.go:130] > [crio]
	I0108 23:12:08.506227  411209 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0108 23:12:08.506232  411209 command_runner.go:130] > # containers images, in this directory.
	I0108 23:12:08.506246  411209 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0108 23:12:08.506256  411209 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0108 23:12:08.506264  411209 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0108 23:12:08.506273  411209 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0108 23:12:08.506282  411209 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0108 23:12:08.506291  411209 command_runner.go:130] > # storage_driver = "vfs"
	I0108 23:12:08.506300  411209 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0108 23:12:08.506312  411209 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0108 23:12:08.506316  411209 command_runner.go:130] > # storage_option = [
	I0108 23:12:08.506322  411209 command_runner.go:130] > # ]
	I0108 23:12:08.506329  411209 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0108 23:12:08.506338  411209 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0108 23:12:08.506345  411209 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0108 23:12:08.506351  411209 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0108 23:12:08.506359  411209 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0108 23:12:08.506366  411209 command_runner.go:130] > # always happen on a node reboot
	I0108 23:12:08.506371  411209 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0108 23:12:08.506380  411209 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0108 23:12:08.506388  411209 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0108 23:12:08.506407  411209 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0108 23:12:08.506421  411209 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0108 23:12:08.506429  411209 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0108 23:12:08.506436  411209 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0108 23:12:08.506451  411209 command_runner.go:130] > # internal_wipe = true
	I0108 23:12:08.506461  411209 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0108 23:12:08.506471  411209 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0108 23:12:08.506479  411209 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0108 23:12:08.506487  411209 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0108 23:12:08.506495  411209 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0108 23:12:08.506505  411209 command_runner.go:130] > [crio.api]
	I0108 23:12:08.506513  411209 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0108 23:12:08.506520  411209 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0108 23:12:08.506526  411209 command_runner.go:130] > # IP address on which the stream server will listen.
	I0108 23:12:08.506533  411209 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0108 23:12:08.506540  411209 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0108 23:12:08.506547  411209 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0108 23:12:08.506555  411209 command_runner.go:130] > # stream_port = "0"
	I0108 23:12:08.506563  411209 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0108 23:12:08.506570  411209 command_runner.go:130] > # stream_enable_tls = false
	I0108 23:12:08.506579  411209 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0108 23:12:08.506586  411209 command_runner.go:130] > # stream_idle_timeout = ""
	I0108 23:12:08.506595  411209 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0108 23:12:08.506604  411209 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0108 23:12:08.506610  411209 command_runner.go:130] > # minutes.
	I0108 23:12:08.506615  411209 command_runner.go:130] > # stream_tls_cert = ""
	I0108 23:12:08.506623  411209 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0108 23:12:08.506630  411209 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0108 23:12:08.506636  411209 command_runner.go:130] > # stream_tls_key = ""
	I0108 23:12:08.506644  411209 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0108 23:12:08.506653  411209 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0108 23:12:08.506660  411209 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0108 23:12:08.506664  411209 command_runner.go:130] > # stream_tls_ca = ""
	I0108 23:12:08.506674  411209 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0108 23:12:08.506687  411209 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0108 23:12:08.506698  411209 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0108 23:12:08.506705  411209 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0108 23:12:08.506727  411209 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0108 23:12:08.506736  411209 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0108 23:12:08.506740  411209 command_runner.go:130] > [crio.runtime]
	I0108 23:12:08.506752  411209 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0108 23:12:08.506761  411209 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0108 23:12:08.506768  411209 command_runner.go:130] > # "nofile=1024:2048"
	I0108 23:12:08.506774  411209 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0108 23:12:08.506780  411209 command_runner.go:130] > # default_ulimits = [
	I0108 23:12:08.506784  411209 command_runner.go:130] > # ]
	I0108 23:12:08.506792  411209 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0108 23:12:08.506799  411209 command_runner.go:130] > # no_pivot = false
	I0108 23:12:08.506804  411209 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0108 23:12:08.506813  411209 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0108 23:12:08.506820  411209 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0108 23:12:08.506826  411209 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0108 23:12:08.506833  411209 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0108 23:12:08.506840  411209 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0108 23:12:08.506846  411209 command_runner.go:130] > # conmon = ""
	I0108 23:12:08.506851  411209 command_runner.go:130] > # Cgroup setting for conmon
	I0108 23:12:08.506860  411209 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0108 23:12:08.506867  411209 command_runner.go:130] > conmon_cgroup = "pod"
	I0108 23:12:08.506876  411209 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0108 23:12:08.506887  411209 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0108 23:12:08.506896  411209 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0108 23:12:08.506903  411209 command_runner.go:130] > # conmon_env = [
	I0108 23:12:08.506906  411209 command_runner.go:130] > # ]
	I0108 23:12:08.506912  411209 command_runner.go:130] > # Additional environment variables to set for all the
	I0108 23:12:08.506919  411209 command_runner.go:130] > # containers. These are overridden if set in the
	I0108 23:12:08.506925  411209 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0108 23:12:08.506931  411209 command_runner.go:130] > # default_env = [
	I0108 23:12:08.506934  411209 command_runner.go:130] > # ]
	I0108 23:12:08.506947  411209 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0108 23:12:08.506954  411209 command_runner.go:130] > # selinux = false
	I0108 23:12:08.506960  411209 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0108 23:12:08.506969  411209 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0108 23:12:08.506977  411209 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0108 23:12:08.506983  411209 command_runner.go:130] > # seccomp_profile = ""
	I0108 23:12:08.506990  411209 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0108 23:12:08.507000  411209 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0108 23:12:08.507013  411209 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0108 23:12:08.507022  411209 command_runner.go:130] > # which might increase security.
	I0108 23:12:08.507029  411209 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0108 23:12:08.507035  411209 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0108 23:12:08.507043  411209 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0108 23:12:08.507052  411209 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0108 23:12:08.507060  411209 command_runner.go:130] > # the profile is set to "unconfined", this is equivalent to disabling AppArmor.
	I0108 23:12:08.507068  411209 command_runner.go:130] > # This option supports live configuration reload.
	I0108 23:12:08.507073  411209 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0108 23:12:08.507080  411209 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0108 23:12:08.507087  411209 command_runner.go:130] > # the cgroup blockio controller.
	I0108 23:12:08.507092  411209 command_runner.go:130] > # blockio_config_file = ""
	I0108 23:12:08.507100  411209 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0108 23:12:08.507107  411209 command_runner.go:130] > # irqbalance daemon.
	I0108 23:12:08.507112  411209 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0108 23:12:08.507121  411209 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0108 23:12:08.507128  411209 command_runner.go:130] > # This option supports live configuration reload.
	I0108 23:12:08.507133  411209 command_runner.go:130] > # rdt_config_file = ""
	I0108 23:12:08.507143  411209 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0108 23:12:08.507150  411209 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0108 23:12:08.507156  411209 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0108 23:12:08.507163  411209 command_runner.go:130] > # separate_pull_cgroup = ""
	I0108 23:12:08.507169  411209 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0108 23:12:08.507178  411209 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0108 23:12:08.507182  411209 command_runner.go:130] > # will be added.
	I0108 23:12:08.507189  411209 command_runner.go:130] > # default_capabilities = [
	I0108 23:12:08.507193  411209 command_runner.go:130] > # 	"CHOWN",
	I0108 23:12:08.507199  411209 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0108 23:12:08.507203  411209 command_runner.go:130] > # 	"FSETID",
	I0108 23:12:08.507209  411209 command_runner.go:130] > # 	"FOWNER",
	I0108 23:12:08.507213  411209 command_runner.go:130] > # 	"SETGID",
	I0108 23:12:08.507220  411209 command_runner.go:130] > # 	"SETUID",
	I0108 23:12:08.507224  411209 command_runner.go:130] > # 	"SETPCAP",
	I0108 23:12:08.507231  411209 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0108 23:12:08.507235  411209 command_runner.go:130] > # 	"KILL",
	I0108 23:12:08.507240  411209 command_runner.go:130] > # ]
	I0108 23:12:08.507252  411209 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0108 23:12:08.507298  411209 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0108 23:12:08.507309  411209 command_runner.go:130] > # add_inheritable_capabilities = true
	I0108 23:12:08.507315  411209 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0108 23:12:08.507323  411209 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0108 23:12:08.507335  411209 command_runner.go:130] > # default_sysctls = [
	I0108 23:12:08.507342  411209 command_runner.go:130] > # ]
	I0108 23:12:08.507347  411209 command_runner.go:130] > # List of devices on the host that a
	I0108 23:12:08.507355  411209 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0108 23:12:08.507362  411209 command_runner.go:130] > # allowed_devices = [
	I0108 23:12:08.507366  411209 command_runner.go:130] > # 	"/dev/fuse",
	I0108 23:12:08.507372  411209 command_runner.go:130] > # ]
	I0108 23:12:08.507377  411209 command_runner.go:130] > # List of additional devices, specified as
	I0108 23:12:08.507411  411209 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0108 23:12:08.507422  411209 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0108 23:12:08.507428  411209 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0108 23:12:08.507432  411209 command_runner.go:130] > # additional_devices = [
	I0108 23:12:08.507436  411209 command_runner.go:130] > # ]
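	Using the "<device-on-host>:<device-on-container>:<permissions>" format and the /dev/sdc example from the comment above, an illustrative snippet (not captured output):

	    [crio.runtime]
	    # Expose a host block device inside every container with read/write/mknod.
	    additional_devices = [
	        "/dev/sdc:/dev/xvdc:rwm",
	    ]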
	I0108 23:12:08.507448  411209 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0108 23:12:08.507456  411209 command_runner.go:130] > # cdi_spec_dirs = [
	I0108 23:12:08.507460  411209 command_runner.go:130] > # 	"/etc/cdi",
	I0108 23:12:08.507467  411209 command_runner.go:130] > # 	"/var/run/cdi",
	I0108 23:12:08.507470  411209 command_runner.go:130] > # ]
	I0108 23:12:08.507479  411209 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0108 23:12:08.507487  411209 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0108 23:12:08.507494  411209 command_runner.go:130] > # Defaults to false.
	I0108 23:12:08.507499  411209 command_runner.go:130] > # device_ownership_from_security_context = false
	I0108 23:12:08.507507  411209 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0108 23:12:08.507515  411209 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0108 23:12:08.507520  411209 command_runner.go:130] > # hooks_dir = [
	I0108 23:12:08.507525  411209 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0108 23:12:08.507531  411209 command_runner.go:130] > # ]
	I0108 23:12:08.507537  411209 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0108 23:12:08.507550  411209 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0108 23:12:08.507559  411209 command_runner.go:130] > # its default mounts from the following two files:
	I0108 23:12:08.507562  411209 command_runner.go:130] > #
	I0108 23:12:08.507577  411209 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0108 23:12:08.507587  411209 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0108 23:12:08.507595  411209 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0108 23:12:08.507601  411209 command_runner.go:130] > #
	I0108 23:12:08.507607  411209 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0108 23:12:08.507615  411209 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0108 23:12:08.507624  411209 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0108 23:12:08.507632  411209 command_runner.go:130] > #      only add mounts it finds in this file.
	I0108 23:12:08.507638  411209 command_runner.go:130] > #
	I0108 23:12:08.507643  411209 command_runner.go:130] > # default_mounts_file = ""
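	A hedged sketch of pointing CRI-O at the override file named above; only the /SRC:/DST format comes from the description, and the sample mount pair in the comment is hypothetical:

	    [crio.runtime]
	    # The override file contains one "/SRC:/DST" mount per line,
	    # e.g. "/usr/share/zoneinfo:/usr/share/zoneinfo" (hypothetical entry).
	    default_mounts_file = "/etc/containers/mounts.conf"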
	I0108 23:12:08.507648  411209 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0108 23:12:08.507659  411209 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0108 23:12:08.507666  411209 command_runner.go:130] > # pids_limit = 0
	I0108 23:12:08.507672  411209 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0108 23:12:08.507681  411209 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0108 23:12:08.507689  411209 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0108 23:12:08.507699  411209 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0108 23:12:08.507706  411209 command_runner.go:130] > # log_size_max = -1
	I0108 23:12:08.507715  411209 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0108 23:12:08.507723  411209 command_runner.go:130] > # log_to_journald = false
	I0108 23:12:08.507729  411209 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0108 23:12:08.507736  411209 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0108 23:12:08.507741  411209 command_runner.go:130] > # Path to directory for container attach sockets.
	I0108 23:12:08.507749  411209 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0108 23:12:08.507754  411209 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0108 23:12:08.507764  411209 command_runner.go:130] > # bind_mount_prefix = ""
	I0108 23:12:08.507772  411209 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0108 23:12:08.507779  411209 command_runner.go:130] > # read_only = false
	I0108 23:12:08.507785  411209 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0108 23:12:08.507794  411209 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0108 23:12:08.507801  411209 command_runner.go:130] > # live configuration reload.
	I0108 23:12:08.507805  411209 command_runner.go:130] > # log_level = "info"
	I0108 23:12:08.507813  411209 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0108 23:12:08.507820  411209 command_runner.go:130] > # This option supports live configuration reload.
	I0108 23:12:08.507824  411209 command_runner.go:130] > # log_filter = ""
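	A sketch combining the two logging options above; the filter expression is a hypothetical example, not from the log:

	    [crio.runtime]
	    # One of: fatal, panic, error, warn, info, debug, trace.
	    log_level = "debug"
	    # Keep only messages matching this regular expression (hypothetical filter).
	    log_filter = "image_pull.*"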
	I0108 23:12:08.507833  411209 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0108 23:12:08.507844  411209 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0108 23:12:08.507852  411209 command_runner.go:130] > # separated by comma.
	I0108 23:12:08.507856  411209 command_runner.go:130] > # uid_mappings = ""
	I0108 23:12:08.507864  411209 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0108 23:12:08.507873  411209 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0108 23:12:08.507879  411209 command_runner.go:130] > # separated by comma.
	I0108 23:12:08.507883  411209 command_runner.go:130] > # gid_mappings = ""
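	Using the containerUID:HostUID:Size form described above; the concrete range below is an assumption for illustration only:

	    [crio.runtime]
	    # Map container UID/GID 0 onto host IDs starting at 100000, 65536 IDs wide
	    # (illustrative range).
	    uid_mappings = "0:100000:65536"
	    gid_mappings = "0:100000:65536"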
	I0108 23:12:08.507891  411209 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0108 23:12:08.507898  411209 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0108 23:12:08.507906  411209 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0108 23:12:08.507911  411209 command_runner.go:130] > # minimum_mappable_uid = -1
	I0108 23:12:08.507920  411209 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0108 23:12:08.507928  411209 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0108 23:12:08.507936  411209 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0108 23:12:08.507949  411209 command_runner.go:130] > # minimum_mappable_gid = -1
	I0108 23:12:08.507955  411209 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0108 23:12:08.507964  411209 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0108 23:12:08.507972  411209 command_runner.go:130] > # value is 30s; lower values are not considered by CRI-O.
	I0108 23:12:08.507979  411209 command_runner.go:130] > # ctr_stop_timeout = 30
	I0108 23:12:08.507987  411209 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0108 23:12:08.507997  411209 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0108 23:12:08.508004  411209 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0108 23:12:08.508012  411209 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0108 23:12:08.508016  411209 command_runner.go:130] > # drop_infra_ctr = true
	I0108 23:12:08.508025  411209 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0108 23:12:08.508032  411209 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0108 23:12:08.508040  411209 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0108 23:12:08.508046  411209 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0108 23:12:08.508052  411209 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0108 23:12:08.508059  411209 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0108 23:12:08.508064  411209 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0108 23:12:08.508073  411209 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0108 23:12:08.508080  411209 command_runner.go:130] > # pinns_path = ""
	I0108 23:12:08.508086  411209 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0108 23:12:08.508094  411209 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0108 23:12:08.508102  411209 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0108 23:12:08.508117  411209 command_runner.go:130] > # default_runtime = "runc"
	I0108 23:12:08.508126  411209 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0108 23:12:08.508136  411209 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of creating it as a directory).
	I0108 23:12:08.508148  411209 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0108 23:12:08.508156  411209 command_runner.go:130] > # creation as a file is not desired either.
	I0108 23:12:08.508167  411209 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0108 23:12:08.508176  411209 command_runner.go:130] > # the hostname is being managed dynamically.
	I0108 23:12:08.508184  411209 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0108 23:12:08.508190  411209 command_runner.go:130] > # ]
	I0108 23:12:08.508197  411209 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0108 23:12:08.508209  411209 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0108 23:12:08.508219  411209 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0108 23:12:08.508228  411209 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0108 23:12:08.508234  411209 command_runner.go:130] > #
	I0108 23:12:08.508239  411209 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0108 23:12:08.508246  411209 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0108 23:12:08.508251  411209 command_runner.go:130] > #  runtime_type = "oci"
	I0108 23:12:08.508258  411209 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0108 23:12:08.508265  411209 command_runner.go:130] > #  privileged_without_host_devices = false
	I0108 23:12:08.508273  411209 command_runner.go:130] > #  allowed_annotations = []
	I0108 23:12:08.508277  411209 command_runner.go:130] > # Where:
	I0108 23:12:08.508285  411209 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0108 23:12:08.508293  411209 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0108 23:12:08.508302  411209 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0108 23:12:08.508311  411209 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0108 23:12:08.508317  411209 command_runner.go:130] > #   in $PATH.
	I0108 23:12:08.508324  411209 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0108 23:12:08.508331  411209 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0108 23:12:08.508340  411209 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0108 23:12:08.508346  411209 command_runner.go:130] > #   state.
	I0108 23:12:08.508353  411209 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0108 23:12:08.508361  411209 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0108 23:12:08.508369  411209 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0108 23:12:08.508377  411209 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0108 23:12:08.508385  411209 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0108 23:12:08.508394  411209 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0108 23:12:08.508404  411209 command_runner.go:130] > #   The currently recognized values are:
	I0108 23:12:08.508414  411209 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0108 23:12:08.508421  411209 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0108 23:12:08.508430  411209 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0108 23:12:08.508439  411209 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0108 23:12:08.508451  411209 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0108 23:12:08.508461  411209 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0108 23:12:08.508470  411209 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0108 23:12:08.508479  411209 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0108 23:12:08.508487  411209 command_runner.go:130] > #   should be moved to the container's cgroup
	I0108 23:12:08.508491  411209 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0108 23:12:08.508499  411209 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0108 23:12:08.508503  411209 command_runner.go:130] > runtime_type = "oci"
	I0108 23:12:08.508510  411209 command_runner.go:130] > runtime_root = "/run/runc"
	I0108 23:12:08.508514  411209 command_runner.go:130] > runtime_config_path = ""
	I0108 23:12:08.508521  411209 command_runner.go:130] > monitor_path = ""
	I0108 23:12:08.508525  411209 command_runner.go:130] > monitor_cgroup = ""
	I0108 23:12:08.508534  411209 command_runner.go:130] > monitor_exec_cgroup = ""
	I0108 23:12:08.508629  411209 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0108 23:12:08.508644  411209 command_runner.go:130] > # running containers
	I0108 23:12:08.508651  411209 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0108 23:12:08.508657  411209 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0108 23:12:08.508667  411209 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0108 23:12:08.508675  411209 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0108 23:12:08.508681  411209 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0108 23:12:08.508688  411209 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0108 23:12:08.508693  411209 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0108 23:12:08.508700  411209 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0108 23:12:08.508705  411209 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0108 23:12:08.508711  411209 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
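	Following the [crio.runtime.runtimes.runtime-handler] format above, a hedged sketch of what enabling the commented-out crun handler could look like; the binary path and runtime_root are assumptions:

	    [crio.runtime.runtimes.crun]
	    # Assumed install location of the crun binary.
	    runtime_path = "/usr/bin/crun"
	    runtime_type = "oci"
	    runtime_root = "/run/crun"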
	I0108 23:12:08.508718  411209 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0108 23:12:08.508726  411209 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0108 23:12:08.508734  411209 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0108 23:12:08.508745  411209 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0108 23:12:08.508752  411209 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0108 23:12:08.508761  411209 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0108 23:12:08.508775  411209 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0108 23:12:08.508786  411209 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0108 23:12:08.508795  411209 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0108 23:12:08.508804  411209 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0108 23:12:08.508811  411209 command_runner.go:130] > # Example:
	I0108 23:12:08.508816  411209 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0108 23:12:08.508823  411209 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0108 23:12:08.508828  411209 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0108 23:12:08.508835  411209 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0108 23:12:08.508839  411209 command_runner.go:130] > # cpuset = "0-1"
	I0108 23:12:08.508846  411209 command_runner.go:130] > # cpushares = 0
	I0108 23:12:08.508850  411209 command_runner.go:130] > # Where:
	I0108 23:12:08.508857  411209 command_runner.go:130] > # The workload name is workload-type.
	I0108 23:12:08.508864  411209 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0108 23:12:08.508871  411209 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0108 23:12:08.508879  411209 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0108 23:12:08.508889  411209 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0108 23:12:08.508898  411209 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0108 23:12:08.508907  411209 command_runner.go:130] > # 
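	Assembling the workload example above into one block (cpuset as a CPU-list string, cpushares as a numeric share count; values illustrative):

	    [crio.runtime.workloads.workload-type]
	    activation_annotation = "io.crio/workload"
	    annotation_prefix = "io.crio.workload-type"
	    [crio.runtime.workloads.workload-type.resources]
	    cpuset = "0-1"
	    cpushares = 0
	    # A pod opts in by carrying the "io.crio/workload" annotation and can
	    # override per container with, e.g.:
	    #   io.crio.workload-type/$container_name = {"cpushares": "value"}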
	I0108 23:12:08.508917  411209 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0108 23:12:08.508923  411209 command_runner.go:130] > #
	I0108 23:12:08.508929  411209 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0108 23:12:08.508940  411209 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0108 23:12:08.508949  411209 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0108 23:12:08.508958  411209 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0108 23:12:08.508966  411209 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0108 23:12:08.508971  411209 command_runner.go:130] > [crio.image]
	I0108 23:12:08.508976  411209 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0108 23:12:08.508983  411209 command_runner.go:130] > # default_transport = "docker://"
	I0108 23:12:08.508989  411209 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0108 23:12:08.508998  411209 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0108 23:12:08.509005  411209 command_runner.go:130] > # global_auth_file = ""
	I0108 23:12:08.509010  411209 command_runner.go:130] > # The image used to instantiate infra containers.
	I0108 23:12:08.509018  411209 command_runner.go:130] > # This option supports live configuration reload.
	I0108 23:12:08.509026  411209 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0108 23:12:08.509032  411209 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0108 23:12:08.509044  411209 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0108 23:12:08.509053  411209 command_runner.go:130] > # This option supports live configuration reload.
	I0108 23:12:08.509058  411209 command_runner.go:130] > # pause_image_auth_file = ""
	I0108 23:12:08.509066  411209 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0108 23:12:08.509075  411209 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0108 23:12:08.509084  411209 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0108 23:12:08.509092  411209 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0108 23:12:08.509096  411209 command_runner.go:130] > # pause_command = "/pause"
	I0108 23:12:08.509105  411209 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0108 23:12:08.509113  411209 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0108 23:12:08.509122  411209 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0108 23:12:08.509130  411209 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0108 23:12:08.509138  411209 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0108 23:12:08.509144  411209 command_runner.go:130] > # signature_policy = ""
	I0108 23:12:08.509154  411209 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0108 23:12:08.509162  411209 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0108 23:12:08.509169  411209 command_runner.go:130] > # changing them here.
	I0108 23:12:08.509173  411209 command_runner.go:130] > # insecure_registries = [
	I0108 23:12:08.509181  411209 command_runner.go:130] > # ]
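	A minimal sketch; the registry address is hypothetical:

	    [crio.image]
	    # Skip TLS verification for a local test registry (hypothetical address).
	    insecure_registries = [
	        "localhost:5000",
	    ]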
	I0108 23:12:08.509188  411209 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0108 23:12:08.509196  411209 command_runner.go:130] > # ignore; the last will ignore volumes entirely.
	I0108 23:12:08.509204  411209 command_runner.go:130] > # image_volumes = "mkdir"
	I0108 23:12:08.509209  411209 command_runner.go:130] > # Temporary directory to use for storing big files
	I0108 23:12:08.509216  411209 command_runner.go:130] > # big_files_temporary_dir = ""
	I0108 23:12:08.509222  411209 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0108 23:12:08.509228  411209 command_runner.go:130] > # CNI plugins.
	I0108 23:12:08.509232  411209 command_runner.go:130] > [crio.network]
	I0108 23:12:08.509240  411209 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0108 23:12:08.509248  411209 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0108 23:12:08.509255  411209 command_runner.go:130] > # cni_default_network = ""
	I0108 23:12:08.509261  411209 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0108 23:12:08.509268  411209 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0108 23:12:08.509274  411209 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0108 23:12:08.509280  411209 command_runner.go:130] > # plugin_dirs = [
	I0108 23:12:08.509284  411209 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0108 23:12:08.509290  411209 command_runner.go:130] > # ]
	I0108 23:12:08.509299  411209 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0108 23:12:08.509306  411209 command_runner.go:130] > [crio.metrics]
	I0108 23:12:08.509311  411209 command_runner.go:130] > # Globally enable or disable metrics support.
	I0108 23:12:08.509317  411209 command_runner.go:130] > # enable_metrics = false
	I0108 23:12:08.509322  411209 command_runner.go:130] > # Specify enabled metrics collectors.
	I0108 23:12:08.509329  411209 command_runner.go:130] > # By default, all metrics are enabled.
	I0108 23:12:08.509335  411209 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0108 23:12:08.509344  411209 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0108 23:12:08.509352  411209 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0108 23:12:08.509356  411209 command_runner.go:130] > # metrics_collectors = [
	I0108 23:12:08.509359  411209 command_runner.go:130] > # 	"operations",
	I0108 23:12:08.509367  411209 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0108 23:12:08.509371  411209 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0108 23:12:08.509378  411209 command_runner.go:130] > # 	"operations_errors",
	I0108 23:12:08.509382  411209 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0108 23:12:08.509389  411209 command_runner.go:130] > # 	"image_pulls_by_name",
	I0108 23:12:08.509394  411209 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0108 23:12:08.509400  411209 command_runner.go:130] > # 	"image_pulls_failures",
	I0108 23:12:08.509408  411209 command_runner.go:130] > # 	"image_pulls_successes",
	I0108 23:12:08.509415  411209 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0108 23:12:08.509419  411209 command_runner.go:130] > # 	"image_layer_reuse",
	I0108 23:12:08.509426  411209 command_runner.go:130] > # 	"containers_oom_total",
	I0108 23:12:08.509430  411209 command_runner.go:130] > # 	"containers_oom",
	I0108 23:12:08.509437  411209 command_runner.go:130] > # 	"processes_defunct",
	I0108 23:12:08.509446  411209 command_runner.go:130] > # 	"operations_total",
	I0108 23:12:08.509452  411209 command_runner.go:130] > # 	"operations_latency_seconds",
	I0108 23:12:08.509459  411209 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0108 23:12:08.509464  411209 command_runner.go:130] > # 	"operations_errors_total",
	I0108 23:12:08.509471  411209 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0108 23:12:08.509475  411209 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0108 23:12:08.509482  411209 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0108 23:12:08.509487  411209 command_runner.go:130] > # 	"image_pulls_success_total",
	I0108 23:12:08.509493  411209 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0108 23:12:08.509498  411209 command_runner.go:130] > # 	"containers_oom_count_total",
	I0108 23:12:08.509503  411209 command_runner.go:130] > # ]
	I0108 23:12:08.509509  411209 command_runner.go:130] > # The port on which the metrics server will listen.
	I0108 23:12:08.509519  411209 command_runner.go:130] > # metrics_port = 9090
	I0108 23:12:08.509531  411209 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0108 23:12:08.509538  411209 command_runner.go:130] > # metrics_socket = ""
	I0108 23:12:08.509543  411209 command_runner.go:130] > # The certificate for the secure metrics server.
	I0108 23:12:08.509557  411209 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0108 23:12:08.509567  411209 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0108 23:12:08.509574  411209 command_runner.go:130] > # certificate on any modification event.
	I0108 23:12:08.509578  411209 command_runner.go:130] > # metrics_cert = ""
	I0108 23:12:08.509586  411209 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0108 23:12:08.509594  411209 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0108 23:12:08.509598  411209 command_runner.go:130] > # metrics_key = ""
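	A hedged sketch enabling the metrics endpoint on the default port named above:

	    [crio.metrics]
	    enable_metrics = true
	    # Default port from the comment above.
	    metrics_port = 9090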
	I0108 23:12:08.509606  411209 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0108 23:12:08.509613  411209 command_runner.go:130] > [crio.tracing]
	I0108 23:12:08.509618  411209 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0108 23:12:08.509625  411209 command_runner.go:130] > # enable_tracing = false
	I0108 23:12:08.509630  411209 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0108 23:12:08.509637  411209 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0108 23:12:08.509643  411209 command_runner.go:130] > # Number of samples to collect per million spans.
	I0108 23:12:08.509652  411209 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0108 23:12:08.509662  411209 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0108 23:12:08.509668  411209 command_runner.go:130] > [crio.stats]
	I0108 23:12:08.509674  411209 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0108 23:12:08.509681  411209 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0108 23:12:08.509689  411209 command_runner.go:130] > # stats_collection_period = 0
	I0108 23:12:08.509773  411209 cni.go:84] Creating CNI manager for ""
	I0108 23:12:08.509783  411209 cni.go:136] 2 nodes found, recommending kindnet
	I0108 23:12:08.509793  411209 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 23:12:08.509814  411209 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-659947 NodeName:multinode-659947-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 23:12:08.509941  411209 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-659947-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 23:12:08.509999  411209 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-659947-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-659947 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 23:12:08.510055  411209 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 23:12:08.520295  411209 command_runner.go:130] > kubeadm
	I0108 23:12:08.520318  411209 command_runner.go:130] > kubectl
	I0108 23:12:08.520322  411209 command_runner.go:130] > kubelet
	I0108 23:12:08.520347  411209 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 23:12:08.520409  411209 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0108 23:12:08.528616  411209 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0108 23:12:08.544852  411209 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 23:12:08.560921  411209 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0108 23:12:08.564144  411209 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 23:12:08.573913  411209 host.go:66] Checking if "multinode-659947" exists ...
	I0108 23:12:08.574107  411209 config.go:182] Loaded profile config "multinode-659947": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 23:12:08.574148  411209 start.go:304] JoinCluster: &{Name:multinode-659947 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-659947 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 23:12:08.574233  411209 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0108 23:12:08.574281  411209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-659947
	I0108 23:12:08.591338  411209 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/multinode-659947/id_rsa Username:docker}
	I0108 23:12:08.734879  411209 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 4owjgt.vbb655kqbffiksah --discovery-token-ca-cert-hash sha256:1f39001c59ef0575e1bc9780ffa85edc1681f20574bd7111835c2c8563c3cd8d 
	I0108 23:12:08.739151  411209 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0108 23:12:08.739206  411209 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 4owjgt.vbb655kqbffiksah --discovery-token-ca-cert-hash sha256:1f39001c59ef0575e1bc9780ffa85edc1681f20574bd7111835c2c8563c3cd8d --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-659947-m02"
	I0108 23:12:08.773406  411209 command_runner.go:130] > [preflight] Running pre-flight checks
	I0108 23:12:08.801580  411209 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0108 23:12:08.801607  411209 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1047-gcp
	I0108 23:12:08.801612  411209 command_runner.go:130] > OS: Linux
	I0108 23:12:08.801618  411209 command_runner.go:130] > CGROUPS_CPU: enabled
	I0108 23:12:08.801624  411209 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0108 23:12:08.801632  411209 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0108 23:12:08.801639  411209 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0108 23:12:08.801647  411209 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0108 23:12:08.801656  411209 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0108 23:12:08.801670  411209 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0108 23:12:08.801681  411209 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0108 23:12:08.801695  411209 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0108 23:12:08.882139  411209 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0108 23:12:08.882170  411209 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0108 23:12:08.909762  411209 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 23:12:08.909889  411209 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 23:12:08.909897  411209 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0108 23:12:08.987114  411209 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0108 23:12:11.001414  411209 command_runner.go:130] > This node has joined the cluster:
	I0108 23:12:11.001445  411209 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0108 23:12:11.001455  411209 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0108 23:12:11.001464  411209 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0108 23:12:11.004596  411209 command_runner.go:130] ! W0108 23:12:08.772937    1111 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0108 23:12:11.004644  411209 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-gcp\n", err: exit status 1
	I0108 23:12:11.004658  411209 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 23:12:11.004679  411209 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 4owjgt.vbb655kqbffiksah --discovery-token-ca-cert-hash sha256:1f39001c59ef0575e1bc9780ffa85edc1681f20574bd7111835c2c8563c3cd8d --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-659947-m02": (2.265460169s)
	I0108 23:12:11.004698  411209 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0108 23:12:11.176716  411209 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0108 23:12:11.176850  411209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a minikube.k8s.io/name=multinode-659947 minikube.k8s.io/updated_at=2024_01_08T23_12_11_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:12:11.248602  411209 command_runner.go:130] > node/multinode-659947-m02 labeled
	I0108 23:12:11.251196  411209 start.go:306] JoinCluster complete in 2.677041401s
	I0108 23:12:11.251234  411209 cni.go:84] Creating CNI manager for ""
	I0108 23:12:11.251242  411209 cni.go:136] 2 nodes found, recommending kindnet
	I0108 23:12:11.251338  411209 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 23:12:11.254909  411209 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0108 23:12:11.254945  411209 command_runner.go:130] >   Size: 4085020   	Blocks: 7984       IO Block: 4096   regular file
	I0108 23:12:11.254954  411209 command_runner.go:130] > Device: 37h/55d	Inode: 1048091     Links: 1
	I0108 23:12:11.254964  411209 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 23:12:11.254978  411209 command_runner.go:130] > Access: 2023-12-04 16:39:01.000000000 +0000
	I0108 23:12:11.254993  411209 command_runner.go:130] > Modify: 2023-12-04 16:39:01.000000000 +0000
	I0108 23:12:11.255006  411209 command_runner.go:130] > Change: 2024-01-08 22:52:05.087229574 +0000
	I0108 23:12:11.255020  411209 command_runner.go:130] >  Birth: 2024-01-08 22:52:05.059227641 +0000
	I0108 23:12:11.255072  411209 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0108 23:12:11.255090  411209 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0108 23:12:11.272271  411209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 23:12:11.491666  411209 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0108 23:12:11.491697  411209 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0108 23:12:11.491702  411209 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0108 23:12:11.491708  411209 command_runner.go:130] > daemonset.apps/kindnet configured
	I0108 23:12:11.492029  411209 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17830-321683/kubeconfig
	I0108 23:12:11.492248  411209 kapi.go:59] client config for multinode-659947: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17830-321683/.minikube/profiles/multinode-659947/client.crt", KeyFile:"/home/jenkins/minikube-integration/17830-321683/.minikube/profiles/multinode-659947/client.key", CAFile:"/home/jenkins/minikube-integration/17830-321683/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 23:12:11.492591  411209 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0108 23:12:11.492599  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:11.492607  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:11.492613  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:11.494700  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:11.494718  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:11.494725  411209 round_trippers.go:580]     Audit-Id: d2211b53-1523-4582-bb78-23af2a2bf775
	I0108 23:12:11.494731  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:11.494739  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:11.494744  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:11.494749  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:11.494754  411209 round_trippers.go:580]     Content-Length: 291
	I0108 23:12:11.494762  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:11 GMT
	I0108 23:12:11.494783  411209 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"1043e7d7-0de0-4829-9106-70235d9b6dea","resourceVersion":"445","creationTimestamp":"2024-01-08T23:11:09Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0108 23:12:11.494873  411209 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-659947" context rescaled to 1 replicas
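
The rescale recorded above goes through the Deployment's Scale subresource (the GET .../deployments/coredns/scale request shown). As a rough illustration of that API interaction, here is a minimal client-go sketch, assuming only a kubeconfig path; the helper name rescaleCoreDNS is hypothetical and this is not minikube's actual implementation:

    package example

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // rescaleCoreDNS reads the coredns Deployment's Scale subresource and,
    // if needed, updates it to the desired replica count -- the same
    // GET .../deployments/coredns/scale interaction the log above records.
    func rescaleCoreDNS(kubeconfig string, replicas int32) error {
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		return err
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		return err
    	}
    	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(context.TODO(), "coredns", metav1.GetOptions{})
    	if err != nil {
    		return err
    	}
    	if scale.Spec.Replicas == replicas {
    		return nil // already at the desired count; nothing to update
    	}
    	scale.Spec.Replicas = replicas
    	_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{})
    	return err
    }

Using the Scale subresource rather than patching the Deployment spec directly keeps the update narrow: only the replica count changes, which matches the 291-byte Scale object returned in the response body above.
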
	I0108 23:12:11.494902  411209 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0108 23:12:11.498399  411209 out.go:177] * Verifying Kubernetes components...
	I0108 23:12:11.500038  411209 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 23:12:11.511401  411209 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17830-321683/kubeconfig
	I0108 23:12:11.511723  411209 kapi.go:59] client config for multinode-659947: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17830-321683/.minikube/profiles/multinode-659947/client.crt", KeyFile:"/home/jenkins/minikube-integration/17830-321683/.minikube/profiles/multinode-659947/client.key", CAFile:"/home/jenkins/minikube-integration/17830-321683/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 23:12:11.512026  411209 node_ready.go:35] waiting up to 6m0s for node "multinode-659947-m02" to be "Ready" ...
	I0108 23:12:11.512128  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:11.512142  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:11.512155  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:11.512174  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:11.514590  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:11.514614  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:11.514624  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:11 GMT
	I0108 23:12:11.514632  411209 round_trippers.go:580]     Audit-Id: 5444b3f8-10cd-4aec-b276-090cd5fb4436
	I0108 23:12:11.514639  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:11.514650  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:11.514660  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:11.514677  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:11.515004  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"485","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5762 chars]
	I0108 23:12:12.012705  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:12.012728  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:12.012736  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:12.012742  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:12.015477  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:12.015506  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:12.015521  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:12.015532  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:12.015540  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:12.015549  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:12.015559  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:12 GMT
	I0108 23:12:12.015575  411209 round_trippers.go:580]     Audit-Id: f3075037-24b3-469b-b17d-e079c082baee
	I0108 23:12:12.015697  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"485","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5762 chars]
	I0108 23:12:12.512489  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:12.512513  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:12.512522  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:12.512528  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:12.515159  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:12.515185  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:12.515195  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:12.515201  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:12.515206  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:12.515211  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:12.515217  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:12 GMT
	I0108 23:12:12.515225  411209 round_trippers.go:580]     Audit-Id: 2f801a8c-126e-4a01-84eb-a7a4d838e7e1
	I0108 23:12:12.515416  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"485","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5762 chars]
	I0108 23:12:13.012600  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:13.012635  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:13.012648  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:13.012658  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:13.015454  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:13.015476  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:13.015487  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:13 GMT
	I0108 23:12:13.015495  411209 round_trippers.go:580]     Audit-Id: c4428c8a-ca2a-40ef-9f07-4c53e73c7bab
	I0108 23:12:13.015503  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:13.015511  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:13.015519  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:13.015527  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:13.015710  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"485","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5762 chars]
	I0108 23:12:13.512221  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:13.512246  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:13.512254  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:13.512261  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:13.514566  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:13.514590  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:13.514597  411209 round_trippers.go:580]     Audit-Id: 129d38e5-79cb-47ca-9d79-1fb9168ccf26
	I0108 23:12:13.514603  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:13.514613  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:13.514618  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:13.514624  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:13.514632  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:13 GMT
	I0108 23:12:13.514786  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"485","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5762 chars]
	I0108 23:12:13.515108  411209 node_ready.go:58] node "multinode-659947-m02" has status "Ready":"False"
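
The repeated GETs above are a readiness poll: the tool fetches the Node object roughly every 500ms and logs "Ready":"False" until the NodeReady condition turns True or the 6m0s budget expires. A minimal sketch of such a loop with client-go follows, assuming an existing *kubernetes.Clientset; the helper name waitNodeReady is hypothetical, not minikube's actual code:

    package example

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls a node until its NodeReady condition is True,
    // mirroring the ~500ms GET loop recorded in the log above.
    func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					return nil // node reported Ready
    				}
    			}
    		}
    		time.Sleep(500 * time.Millisecond) // transient errors are retried until the deadline
    	}
    	return fmt.Errorf("node %q not Ready after %v", name, timeout)
    }

Note how the resourceVersion in the responses below stays at 485 while the node is still converging: the kubelet has registered the node but has not yet posted a Ready condition update.
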
	I0108 23:12:14.012368  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:14.012392  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:14.012400  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:14.012406  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:14.014830  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:14.014854  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:14.014864  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:14.014872  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:14 GMT
	I0108 23:12:14.014879  411209 round_trippers.go:580]     Audit-Id: 9a3ad95a-c972-4e52-9e5d-e041dd3afe32
	I0108 23:12:14.014887  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:14.014894  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:14.014902  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:14.015627  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"485","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5762 chars]
	I0108 23:12:14.512651  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:14.512673  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:14.512682  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:14.512688  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:14.515117  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:14.515146  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:14.515154  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:14.515160  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:14.515165  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:14 GMT
	I0108 23:12:14.515170  411209 round_trippers.go:580]     Audit-Id: de24497d-67f4-4bb3-967f-dfa7804f6264
	I0108 23:12:14.515175  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:14.515180  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:14.515360  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"485","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5762 chars]
	I0108 23:12:15.013110  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:15.013138  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:15.013149  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:15.013157  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:15.015668  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:15.015697  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:15.015707  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:15.015713  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:15.015719  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:15.015729  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:15 GMT
	I0108 23:12:15.015737  411209 round_trippers.go:580]     Audit-Id: 2f2fa3d3-ed5f-4574-ad89-34443ddb350e
	I0108 23:12:15.015748  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:15.015904  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"485","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5762 chars]
	I0108 23:12:15.512431  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:15.512474  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:15.512483  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:15.512490  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:15.514942  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:15.514966  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:15.514980  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:15.514989  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:15.514996  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:15.515003  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:15 GMT
	I0108 23:12:15.515011  411209 round_trippers.go:580]     Audit-Id: 9fda1c1c-f88c-437c-9ead-0586d1b9ff79
	I0108 23:12:15.515025  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:15.515160  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"485","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5762 chars]
	I0108 23:12:15.515506  411209 node_ready.go:58] node "multinode-659947-m02" has status "Ready":"False"
	I0108 23:12:16.012711  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:16.012732  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:16.012741  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:16.012747  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:16.015181  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:16.015212  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:16.015223  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:16.015231  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:16.015240  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:16.015250  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:16 GMT
	I0108 23:12:16.015274  411209 round_trippers.go:580]     Audit-Id: a1b02e11-26c4-4824-80c0-3de337b33011
	I0108 23:12:16.015287  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:16.015450  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"485","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5762 chars]
	I0108 23:12:16.513094  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:16.513117  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:16.513125  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:16.513131  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:16.515599  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:16.515621  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:16.515628  411209 round_trippers.go:580]     Audit-Id: 5e9f534d-624b-4987-8fcf-e3d212afb3ec
	I0108 23:12:16.515634  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:16.515642  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:16.515650  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:16.515658  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:16.515667  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:16 GMT
	I0108 23:12:16.515861  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"485","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5762 chars]
	I0108 23:12:17.012371  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:17.012399  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:17.012408  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:17.012415  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:17.014914  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:17.014942  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:17.014953  411209 round_trippers.go:580]     Audit-Id: f05c00df-35c7-461d-85d2-e6c576538d04
	I0108 23:12:17.014962  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:17.014974  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:17.014983  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:17.014990  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:17.014995  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:17 GMT
	I0108 23:12:17.015163  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"485","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5762 chars]
	I0108 23:12:17.512305  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:17.512331  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:17.512339  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:17.512345  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:17.514822  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:17.514846  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:17.514857  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:17.514865  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:17.514871  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:17 GMT
	I0108 23:12:17.514880  411209 round_trippers.go:580]     Audit-Id: 51a5d24b-441c-49a7-8f04-b119ba3785d7
	I0108 23:12:17.514886  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:17.514894  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:17.515031  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"485","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5762 chars]
	I0108 23:12:18.012624  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:18.012649  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:18.012657  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:18.012663  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:18.015140  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:18.015170  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:18.015181  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:18.015189  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:18 GMT
	I0108 23:12:18.015197  411209 round_trippers.go:580]     Audit-Id: 49093770-e505-47b5-b4cd-745d94ab2538
	I0108 23:12:18.015207  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:18.015215  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:18.015225  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:18.015364  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"485","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5762 chars]
	I0108 23:12:18.015685  411209 node_ready.go:58] node "multinode-659947-m02" has status "Ready":"False"
	I0108 23:12:18.512526  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:18.512547  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:18.512555  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:18.512561  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:18.515155  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:18.515175  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:18.515184  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:18 GMT
	I0108 23:12:18.515192  411209 round_trippers.go:580]     Audit-Id: 432fd13e-aea1-402c-b713-2221e46d91ab
	I0108 23:12:18.515199  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:18.515206  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:18.515214  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:18.515227  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:18.515393  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"485","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5762 chars]
	I0108 23:12:19.013051  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:19.013077  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:19.013086  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:19.013097  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:19.015567  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:19.015604  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:19.015617  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:19.015626  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:19.015636  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:19.015646  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:19.015661  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:19 GMT
	I0108 23:12:19.015670  411209 round_trippers.go:580]     Audit-Id: 1c8f0a39-8e08-4b67-90d1-a713eff21273
	I0108 23:12:19.015823  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"485","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5762 chars]
	I0108 23:12:19.512350  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:19.512378  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:19.512386  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:19.512393  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:19.515253  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:19.515296  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:19.515305  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:19 GMT
	I0108 23:12:19.515311  411209 round_trippers.go:580]     Audit-Id: c76d8cb9-2ea3-4138-9f6d-f050a1ede0f4
	I0108 23:12:19.515317  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:19.515322  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:19.515327  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:19.515333  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:19.515489  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"485","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5762 chars]
	I0108 23:12:20.013151  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:20.013178  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:20.013187  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:20.013193  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:20.015654  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:20.015683  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:20.015694  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:20.015704  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:20.015712  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:20.015721  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:20 GMT
	I0108 23:12:20.015730  411209 round_trippers.go:580]     Audit-Id: ecff8645-1c3e-4da8-a904-4fbb5a67638f
	I0108 23:12:20.015740  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:20.015868  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"485","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5762 chars]
	I0108 23:12:20.016171  411209 node_ready.go:58] node "multinode-659947-m02" has status "Ready":"False"
	I0108 23:12:20.512421  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:20.512446  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:20.512455  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:20.512462  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:20.514872  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:20.514899  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:20.514911  411209 round_trippers.go:580]     Audit-Id: 4cf8bfbc-7142-447d-b908-d17ebe19efeb
	I0108 23:12:20.514920  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:20.514928  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:20.514934  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:20.514939  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:20.514944  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:20 GMT
	I0108 23:12:20.515090  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"485","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5762 chars]
	I0108 23:12:21.012258  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:21.012284  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:21.012292  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:21.012298  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:21.015372  411209 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 23:12:21.015396  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:21.015403  411209 round_trippers.go:580]     Audit-Id: 9ef8d087-595d-442d-bc7c-38155943c3ed
	I0108 23:12:21.015409  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:21.015415  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:21.015420  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:21.015425  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:21.015430  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:21 GMT
	I0108 23:12:21.015599  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"506","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0108 23:12:21.512243  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:21.512270  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:21.512280  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:21.512289  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:21.514678  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:21.514700  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:21.514710  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:21.514717  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:21 GMT
	I0108 23:12:21.514725  411209 round_trippers.go:580]     Audit-Id: 561021d0-3f24-466c-a931-d92ea0a47a3b
	I0108 23:12:21.514732  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:21.514750  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:21.514758  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:21.514964  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"506","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0108 23:12:22.012981  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:22.013011  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:22.013021  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:22.013027  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:22.015627  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:22.015665  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:22.015678  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:22.015688  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:22.015696  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:22.015705  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:22 GMT
	I0108 23:12:22.015714  411209 round_trippers.go:580]     Audit-Id: ee934f65-a60b-4b23-8bed-3fc2e566017a
	I0108 23:12:22.015720  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:22.015844  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"506","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0108 23:12:22.016241  411209 node_ready.go:58] node "multinode-659947-m02" has status "Ready":"False"
	I0108 23:12:22.512535  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:22.512556  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:22.512564  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:22.512570  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:22.515102  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:22.515125  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:22.515133  411209 round_trippers.go:580]     Audit-Id: 67db14bb-81b8-4001-8776-276037c8f7f3
	I0108 23:12:22.515139  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:22.515145  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:22.515150  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:22.515156  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:22.515165  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:22 GMT
	I0108 23:12:22.515338  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"506","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0108 23:12:23.013069  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:23.013098  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:23.013106  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:23.013112  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:23.015691  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:23.015716  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:23.015725  411209 round_trippers.go:580]     Audit-Id: e3612648-a1ba-4aa2-a9bb-820dac8f3b35
	I0108 23:12:23.015736  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:23.015743  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:23.015751  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:23.015761  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:23.015769  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:23 GMT
	I0108 23:12:23.015903  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"506","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0108 23:12:23.512638  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:23.512662  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:23.512671  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:23.512677  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:23.515041  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:23.515061  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:23.515068  411209 round_trippers.go:580]     Audit-Id: 362bda7f-0786-4317-b372-c55cda50da59
	I0108 23:12:23.515074  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:23.515079  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:23.515084  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:23.515089  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:23.515094  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:23 GMT
	I0108 23:12:23.515282  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"506","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0108 23:12:24.012664  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:24.012691  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:24.012699  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:24.012706  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:24.015288  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:24.015314  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:24.015324  411209 round_trippers.go:580]     Audit-Id: 81fa0056-741a-454a-b43d-4d5bf9acacbf
	I0108 23:12:24.015333  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:24.015344  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:24.015357  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:24.015366  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:24.015371  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:24 GMT
	I0108 23:12:24.015509  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"506","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0108 23:12:24.513121  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:24.513144  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:24.513153  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:24.513159  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:24.515345  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:24.515369  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:24.515382  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:24.515390  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:24 GMT
	I0108 23:12:24.515398  411209 round_trippers.go:580]     Audit-Id: 7b3354da-552f-445f-8e8e-f41860b47819
	I0108 23:12:24.515406  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:24.515417  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:24.515429  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:24.515583  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"506","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0108 23:12:24.516000  411209 node_ready.go:58] node "multinode-659947-m02" has status "Ready":"False"
	I0108 23:12:25.013257  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:25.013280  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:25.013288  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:25.013294  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:25.015726  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:25.015749  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:25.015757  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:25.015763  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:25 GMT
	I0108 23:12:25.015768  411209 round_trippers.go:580]     Audit-Id: 9da3bc75-e83a-4e14-a5f5-f5cde8771722
	I0108 23:12:25.015774  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:25.015782  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:25.015793  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:25.015934  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"506","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0108 23:12:25.512562  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:25.512587  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:25.512604  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:25.512614  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:25.515211  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:25.515243  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:25.515275  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:25 GMT
	I0108 23:12:25.515286  411209 round_trippers.go:580]     Audit-Id: e75c1a96-8c2c-4d03-8af9-98efdd79775c
	I0108 23:12:25.515295  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:25.515304  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:25.515313  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:25.515322  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:25.515484  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"506","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0108 23:12:26.013069  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:26.013093  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:26.013101  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:26.013107  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:26.015575  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:26.015596  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:26.015603  411209 round_trippers.go:580]     Audit-Id: 1ec7e200-4fe0-45c8-8c7a-5a7d638e28a9
	I0108 23:12:26.015609  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:26.015615  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:26.015620  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:26.015625  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:26.015632  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:26 GMT
	I0108 23:12:26.015814  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"506","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0108 23:12:26.512516  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:26.512541  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:26.512555  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:26.512567  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:26.514714  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:26.514740  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:26.514750  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:26.514759  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:26.514767  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:26.514772  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:26 GMT
	I0108 23:12:26.514777  411209 round_trippers.go:580]     Audit-Id: fa063db5-2b0c-4def-8af2-daf5d6720840
	I0108 23:12:26.514783  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:26.514919  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"506","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0108 23:12:27.012490  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:27.012521  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:27.012531  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:27.012538  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:27.015153  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:27.015179  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:27.015189  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:27.015198  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:27.015205  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:27 GMT
	I0108 23:12:27.015213  411209 round_trippers.go:580]     Audit-Id: 5ae24e86-328e-4a38-9cee-147c3efb14a3
	I0108 23:12:27.015220  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:27.015231  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:27.015410  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"506","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0108 23:12:27.015772  411209 node_ready.go:58] node "multinode-659947-m02" has status "Ready":"False"
	I0108 23:12:27.512305  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:27.512327  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:27.512335  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:27.512342  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:27.514723  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:27.514820  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:27.514836  411209 round_trippers.go:580]     Audit-Id: 482e9051-98fd-4792-8156-c97c7eec770b
	I0108 23:12:27.514846  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:27.514853  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:27.514863  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:27.514880  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:27.514892  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:27 GMT
	I0108 23:12:27.515037  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"506","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0108 23:12:28.012532  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:28.012560  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:28.012571  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:28.012577  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:28.015027  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:28.015053  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:28.015064  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:28.015076  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:28.015084  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:28.015092  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:28 GMT
	I0108 23:12:28.015100  411209 round_trippers.go:580]     Audit-Id: 7deadbec-3f93-41d3-bc61-6bff378ff24b
	I0108 23:12:28.015109  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:28.015246  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"506","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0108 23:12:28.512514  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:28.512536  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:28.512544  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:28.512552  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:28.514710  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:28.514732  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:28.514741  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:28.514747  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:28.514753  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:28.514758  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:28 GMT
	I0108 23:12:28.514763  411209 round_trippers.go:580]     Audit-Id: ba18bbbc-df59-4c35-9e66-716394342fe7
	I0108 23:12:28.514768  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:28.514899  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"506","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0108 23:12:29.012391  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:29.012429  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:29.012440  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:29.012448  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:29.014872  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:29.014900  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:29.014912  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:29.014922  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:29.014930  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:29.014939  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:29 GMT
	I0108 23:12:29.014947  411209 round_trippers.go:580]     Audit-Id: 8931e3b7-40fc-4ef7-986a-ae17e068570b
	I0108 23:12:29.014957  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:29.015110  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"506","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0108 23:12:29.512716  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:29.512740  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:29.512749  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:29.512755  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:29.515277  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:29.515303  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:29.515312  411209 round_trippers.go:580]     Audit-Id: d0aedfbd-da4f-455c-8ce1-f2f6d3f3c8f0
	I0108 23:12:29.515322  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:29.515331  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:29.515339  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:29.515350  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:29.515363  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:29 GMT
	I0108 23:12:29.515508  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"506","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0108 23:12:29.515934  411209 node_ready.go:58] node "multinode-659947-m02" has status "Ready":"False"
	I0108 23:12:30.013164  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:30.013186  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:30.013194  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:30.013202  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:30.015663  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:30.015684  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:30.015691  411209 round_trippers.go:580]     Audit-Id: 81a62974-977b-438d-aafd-69493e66f5f3
	I0108 23:12:30.015697  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:30.015702  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:30.015708  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:30.015716  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:30.015726  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:30 GMT
	I0108 23:12:30.015912  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"506","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0108 23:12:30.513134  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:30.513158  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:30.513166  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:30.513173  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:30.515161  411209 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 23:12:30.515187  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:30.515199  411209 round_trippers.go:580]     Audit-Id: b70d53b8-5b9c-460c-a400-55266d4f48b7
	I0108 23:12:30.515208  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:30.515217  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:30.515227  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:30.515238  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:30.515244  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:30 GMT
	I0108 23:12:30.515405  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"506","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0108 23:12:31.012617  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:31.012641  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:31.012650  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:31.012657  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:31.015311  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:31.015339  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:31.015349  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:31.015356  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:31 GMT
	I0108 23:12:31.015363  411209 round_trippers.go:580]     Audit-Id: 5e66a764-7888-4f1e-bb46-28a4e2a1f709
	I0108 23:12:31.015371  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:31.015380  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:31.015391  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:31.015533  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"506","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0108 23:12:31.513190  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:31.513215  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:31.513226  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:31.513234  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:31.515623  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:31.515652  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:31.515661  411209 round_trippers.go:580]     Audit-Id: 4bd7ddb2-59e0-4e64-a729-dbd60ddedb08
	I0108 23:12:31.515669  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:31.515677  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:31.515685  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:31.515696  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:31.515705  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:31 GMT
	I0108 23:12:31.515882  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"506","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0108 23:12:31.516237  411209 node_ready.go:58] node "multinode-659947-m02" has status "Ready":"False"
	I0108 23:12:32.012930  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:32.012954  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:32.012963  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:32.012969  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:32.015336  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:32.015365  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:32.015375  411209 round_trippers.go:580]     Audit-Id: 836184bb-00e9-47d8-a3a0-3ee985506a85
	I0108 23:12:32.015381  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:32.015386  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:32.015391  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:32.015396  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:32.015404  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:32 GMT
	I0108 23:12:32.015564  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"506","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0108 23:12:32.512277  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:32.512300  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:32.512309  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:32.512315  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:32.514635  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:32.514659  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:32.514667  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:32.514672  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:32.514677  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:32.514683  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:32 GMT
	I0108 23:12:32.514687  411209 round_trippers.go:580]     Audit-Id: 7263a942-1345-4690-ba8e-97454475f2f2
	I0108 23:12:32.514693  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:32.514855  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"506","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0108 23:12:33.013221  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:33.013249  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:33.013261  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:33.013268  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:33.015657  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:33.015679  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:33.015686  411209 round_trippers.go:580]     Audit-Id: 0bd0b570-df38-4543-8bbc-d810f74c2936
	I0108 23:12:33.015695  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:33.015703  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:33.015711  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:33.015719  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:33.015736  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:33 GMT
	I0108 23:12:33.015859  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"506","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0108 23:12:33.512435  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:33.512459  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:33.512467  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:33.512474  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:33.514915  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:33.514945  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:33.514953  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:33.514962  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:33 GMT
	I0108 23:12:33.514975  411209 round_trippers.go:580]     Audit-Id: 50caa479-5863-4348-92f7-f4e25a5795bb
	I0108 23:12:33.514985  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:33.514994  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:33.515004  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:33.515160  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"506","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0108 23:12:34.012686  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:34.012711  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:34.012722  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:34.012728  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:34.015429  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:34.015458  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:34.015471  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:34.015480  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:34.015489  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:34.015497  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:34.015506  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:34 GMT
	I0108 23:12:34.015531  411209 round_trippers.go:580]     Audit-Id: dfd44640-4d0b-4722-a3db-d548fd1eb0aa
	I0108 23:12:34.015689  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"506","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0108 23:12:34.016041  411209 node_ready.go:58] node "multinode-659947-m02" has status "Ready":"False"
	I0108 23:12:34.512304  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:34.512338  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:34.512350  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:34.512362  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:34.514851  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:34.514883  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:34.514894  411209 round_trippers.go:580]     Audit-Id: 599d9b3c-3e88-42fc-abd9-f224e5b536ae
	I0108 23:12:34.514902  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:34.514910  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:34.514917  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:34.514929  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:34.514938  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:34 GMT
	I0108 23:12:34.515118  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"506","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0108 23:12:35.012310  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:35.012335  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:35.012344  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:35.012350  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:35.014897  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:35.014920  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:35.014929  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:35 GMT
	I0108 23:12:35.014939  411209 round_trippers.go:580]     Audit-Id: 4f593a5f-39db-4380-9041-09339fcde34c
	I0108 23:12:35.014948  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:35.014957  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:35.014965  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:35.014970  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:35.015177  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"506","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0108 23:12:35.512844  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:35.512867  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:35.512876  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:35.512882  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:35.515175  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:35.515199  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:35.515210  411209 round_trippers.go:580]     Audit-Id: d0075c7a-40f0-4720-ad94-fe14627f8f92
	I0108 23:12:35.515219  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:35.515226  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:35.515233  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:35.515239  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:35.515248  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:35 GMT
	I0108 23:12:35.515387  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"506","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0108 23:12:36.012656  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:36.012682  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:36.012691  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:36.012697  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:36.015354  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:36.015379  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:36.015386  411209 round_trippers.go:580]     Audit-Id: a7dafedb-7b13-4694-9c45-34a98da27941
	I0108 23:12:36.015392  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:36.015398  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:36.015403  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:36.015408  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:36.015413  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:36 GMT
	I0108 23:12:36.015589  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"506","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0108 23:12:36.512233  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:36.512261  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:36.512269  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:36.512276  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:36.514386  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:36.514404  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:36.514415  411209 round_trippers.go:580]     Audit-Id: 4b7e1abe-77c3-46b0-b84f-954911e8098f
	I0108 23:12:36.514425  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:36.514434  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:36.514446  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:36.514454  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:36.514460  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:36 GMT
	I0108 23:12:36.514605  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"506","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0108 23:12:36.515019  411209 node_ready.go:58] node "multinode-659947-m02" has status "Ready":"False"
	I0108 23:12:37.012232  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:37.012255  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:37.012266  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:37.012272  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:37.014741  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:37.014772  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:37.014784  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:37.014792  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:37.014801  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:37.014815  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:37 GMT
	I0108 23:12:37.014826  411209 round_trippers.go:580]     Audit-Id: b394b935-58b8-4451-9868-6ca922ce9ba5
	I0108 23:12:37.014835  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:37.015063  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"506","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0108 23:12:37.513102  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:37.513129  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:37.513138  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:37.513144  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:37.515556  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:37.515582  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:37.515592  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:37.515601  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:37.515610  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:37.515619  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:37 GMT
	I0108 23:12:37.515627  411209 round_trippers.go:580]     Audit-Id: 500a826b-592a-49d3-b5a0-a3db223c8668
	I0108 23:12:37.515633  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:37.515775  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"506","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0108 23:12:38.012413  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:38.012437  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:38.012446  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:38.012452  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:38.014701  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:38.014724  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:38.014733  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:38.014740  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:38.014745  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:38.014750  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:38.014755  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:38 GMT
	I0108 23:12:38.014760  411209 round_trippers.go:580]     Audit-Id: 2a299ffa-9d86-4452-b816-24d4c731313f
	I0108 23:12:38.014928  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"506","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0108 23:12:38.513119  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:38.513143  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:38.513152  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:38.513160  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:38.515371  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:38.515391  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:38.515398  411209 round_trippers.go:580]     Audit-Id: 03fac49b-f54a-4f34-ae8e-0c985bbd9d01
	I0108 23:12:38.515404  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:38.515409  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:38.515417  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:38.515422  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:38.515428  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:38 GMT
	I0108 23:12:38.515589  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"506","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0108 23:12:38.515928  411209 node_ready.go:58] node "multinode-659947-m02" has status "Ready":"False"
	I0108 23:12:39.012269  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:39.012290  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:39.012298  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:39.012305  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:39.015213  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:39.015237  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:39.015247  411209 round_trippers.go:580]     Audit-Id: 6dffe7a1-0e2c-4083-9824-9ee65c5524c7
	I0108 23:12:39.015253  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:39.015278  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:39.015287  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:39.015296  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:39.015308  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:39 GMT
	I0108 23:12:39.015436  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"506","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0108 23:12:39.513085  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:39.513107  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:39.513117  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:39.513124  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:39.515484  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:39.515508  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:39.515516  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:39.515521  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:39 GMT
	I0108 23:12:39.515527  411209 round_trippers.go:580]     Audit-Id: b4e914a2-d1f5-4885-91f5-d272bf4798f1
	I0108 23:12:39.515532  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:39.515540  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:39.515548  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:39.515675  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"506","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0108 23:12:40.012221  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:40.012245  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:40.012253  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:40.012260  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:40.014814  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:40.014843  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:40.014850  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:40.014856  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:40.014864  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:40 GMT
	I0108 23:12:40.014872  411209 round_trippers.go:580]     Audit-Id: 59c2b93f-a351-4329-be41-68c822004f1c
	I0108 23:12:40.014880  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:40.014887  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:40.015008  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"506","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0108 23:12:40.512606  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:40.512631  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:40.512639  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:40.512645  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:40.515071  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:40.515091  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:40.515098  411209 round_trippers.go:580]     Audit-Id: 194f5b20-c366-484b-91b1-f5293c9b869c
	I0108 23:12:40.515106  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:40.515114  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:40.515122  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:40.515132  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:40.515140  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:40 GMT
	I0108 23:12:40.515279  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"506","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0108 23:12:41.012501  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:41.012526  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:41.012535  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:41.012541  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:41.015185  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:41.015207  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:41.015216  411209 round_trippers.go:580]     Audit-Id: 85aa3c84-d615-4665-9a3a-3c29f5ea5fc3
	I0108 23:12:41.015225  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:41.015234  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:41.015243  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:41.015252  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:41.015276  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:41 GMT
	I0108 23:12:41.015397  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"506","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0108 23:12:41.015717  411209 node_ready.go:58] node "multinode-659947-m02" has status "Ready":"False"
	I0108 23:12:41.513048  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:41.513069  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:41.513080  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:41.513086  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:41.515411  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:41.515430  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:41.515440  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:41.515449  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:41.515457  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:41.515464  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:41 GMT
	I0108 23:12:41.515472  411209 round_trippers.go:580]     Audit-Id: 480cb155-10b1-4564-be2c-1f721563b31c
	I0108 23:12:41.515482  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:41.515626  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"506","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0108 23:12:42.013408  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:42.013438  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:42.013449  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:42.013457  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:42.015968  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:42.015994  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:42.016006  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:42.016015  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:42.016024  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:42 GMT
	I0108 23:12:42.016032  411209 round_trippers.go:580]     Audit-Id: d5024d6f-eb1c-43b2-bd1e-2a86c5888840
	I0108 23:12:42.016039  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:42.016048  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:42.016208  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"506","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0108 23:12:42.512966  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:42.512998  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:42.513007  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:42.513013  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:42.515325  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:42.515345  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:42.515353  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:42.515358  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:42.515364  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:42.515372  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:42 GMT
	I0108 23:12:42.515380  411209 round_trippers.go:580]     Audit-Id: 269e2db4-da21-46f4-8c20-4b61ed6b3d01
	I0108 23:12:42.515388  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:42.515532  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"506","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0108 23:12:43.013221  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:43.013250  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:43.013259  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:43.013265  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:43.015707  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:43.015728  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:43.015735  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:43.015740  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:43.015746  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:43 GMT
	I0108 23:12:43.015751  411209 round_trippers.go:580]     Audit-Id: 77b03409-3371-49b1-8973-05b3e5aefe79
	I0108 23:12:43.015757  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:43.015764  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:43.015958  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"506","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I0108 23:12:43.016299  411209 node_ready.go:58] node "multinode-659947-m02" has status "Ready":"False"
	I0108 23:12:43.512515  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:43.512538  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:43.512546  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:43.512552  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:43.515162  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:43.515187  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:43.515197  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:43 GMT
	I0108 23:12:43.515204  411209 round_trippers.go:580]     Audit-Id: e8d47fb8-ce74-4cd6-9846-dd4517a1d121
	I0108 23:12:43.515212  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:43.515221  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:43.515233  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:43.515241  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:43.515385  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"530","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5848 chars]
	I0108 23:12:43.515726  411209 node_ready.go:49] node "multinode-659947-m02" has status "Ready":"True"
	I0108 23:12:43.515746  411209 node_ready.go:38] duration metric: took 32.003694835s waiting for node "multinode-659947-m02" to be "Ready" ...
	I0108 23:12:43.515760  411209 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
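At this point the log switches phases: the node wait is done (32s), and the test now lists kube-system pods once and waits, per label selector, for every system-critical pod to report the PodReady condition. A sketch of that phase under the same assumptions, reusing the imports and clientset from the sketch above (again not minikube's exact pod_ready code):

	// Label selectors named in the log line above.
	var criticalSelectors = []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy",
		"component=kube-scheduler",
	}

	// podReady reports whether the pod's Ready condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	// waitSystemPods polls each pod matching a critical selector until Ready,
	// mirroring the per-pod GETs that follow in the log.
	func waitSystemPods(ctx context.Context, cs kubernetes.Interface) error {
		for _, sel := range criticalSelectors {
			pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
			if err != nil {
				return err
			}
			for i := range pods.Items {
				for !podReady(&pods.Items[i]) {
					time.Sleep(500 * time.Millisecond)
					p, err := cs.CoreV1().Pods("kube-system").Get(ctx, pods.Items[i].Name, metav1.GetOptions{})
					if err != nil {
						return err
					}
					pods.Items[i] = *p
				}
			}
		}
		return nil
	}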
	I0108 23:12:43.515830  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0108 23:12:43.515841  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:43.515851  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:43.515861  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:43.519055  411209 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 23:12:43.519075  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:43.519085  411209 round_trippers.go:580]     Audit-Id: 64d8dd95-3a0c-4310-938d-0af2b05b1632
	I0108 23:12:43.519093  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:43.519101  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:43.519110  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:43.519119  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:43.519136  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:43 GMT
	I0108 23:12:43.519774  411209 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"530"},"items":[{"metadata":{"name":"coredns-5dd5756b68-7vbqm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5ab36954-d4e3-4e0c-8635-399567429001","resourceVersion":"440","creationTimestamp":"2024-01-08T23:11:22Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"10c73502-72ee-4bbf-af0e-2b9d1dc4670b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:11:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"10c73502-72ee-4bbf-af0e-2b9d1dc4670b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68972 chars]
	I0108 23:12:43.522975  411209 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-7vbqm" in "kube-system" namespace to be "Ready" ...
	I0108 23:12:43.523089  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-7vbqm
	I0108 23:12:43.523099  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:43.523110  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:43.523119  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:43.524997  411209 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 23:12:43.525014  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:43.525023  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:43.525030  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:43 GMT
	I0108 23:12:43.525039  411209 round_trippers.go:580]     Audit-Id: 06efa82d-11cd-43d8-b960-ea5b908af9e2
	I0108 23:12:43.525047  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:43.525057  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:43.525067  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:43.525178  411209 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-7vbqm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5ab36954-d4e3-4e0c-8635-399567429001","resourceVersion":"440","creationTimestamp":"2024-01-08T23:11:22Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"10c73502-72ee-4bbf-af0e-2b9d1dc4670b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:11:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"10c73502-72ee-4bbf-af0e-2b9d1dc4670b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0108 23:12:43.525554  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:12:43.525567  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:43.525574  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:43.525580  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:43.527280  411209 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 23:12:43.527300  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:43.527310  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:43.527318  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:43 GMT
	I0108 23:12:43.527327  411209 round_trippers.go:580]     Audit-Id: 107e4a59-e9a2-418a-b258-24d8627ffb3e
	I0108 23:12:43.527339  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:43.527351  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:43.527362  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:43.527484  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"422","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0108 23:12:43.527756  411209 pod_ready.go:92] pod "coredns-5dd5756b68-7vbqm" in "kube-system" namespace has status "Ready":"True"
	I0108 23:12:43.527770  411209 pod_ready.go:81] duration metric: took 4.768995ms waiting for pod "coredns-5dd5756b68-7vbqm" in "kube-system" namespace to be "Ready" ...
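Note that each pod check above is paired with a GET of /api/v1/nodes/multinode-659947. One plausible reading, an assumption on our part rather than something the log states, is that a pod only counts as Ready while the node being validated is itself still Ready. A compact sketch of that combined check, reusing podReady from the previous sketch:

	// podAndNodeReady is a hypothetical helper: true only when both the pod's
	// Ready condition and the named node's NodeReady condition are True.
	func podAndNodeReady(ctx context.Context, cs kubernetes.Interface, podName, nodeName string) (bool, error) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, podName, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		if !podReady(pod) {
			return false, nil
		}
		node, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				return true, nil
			}
		}
		return false, nil
	}

The remaining log entries repeat this pod-then-node pattern for etcd, kube-apiserver, and kube-controller-manager, each resolving in a few milliseconds since the control-plane pods are already Running.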
	I0108 23:12:43.527779  411209 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-659947" in "kube-system" namespace to be "Ready" ...
	I0108 23:12:43.527827  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-659947
	I0108 23:12:43.527834  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:43.527841  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:43.527847  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:43.529432  411209 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 23:12:43.529450  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:43.529460  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:43.529466  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:43.529472  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:43 GMT
	I0108 23:12:43.529479  411209 round_trippers.go:580]     Audit-Id: c8f33683-929e-4c54-a22e-52e8177aaa53
	I0108 23:12:43.529484  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:43.529493  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:43.529649  411209 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-659947","namespace":"kube-system","uid":"4a1f5448-9a96-4c2d-b974-fc8604a23e20","resourceVersion":"307","creationTimestamp":"2024-01-08T23:11:09Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"4836d88bf0fddab354d811e82e0bcaaf","kubernetes.io/config.mirror":"4836d88bf0fddab354d811e82e0bcaaf","kubernetes.io/config.seen":"2024-01-08T23:11:09.365606083Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:11:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0108 23:12:43.529955  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:12:43.529965  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:43.529972  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:43.529977  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:43.531607  411209 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 23:12:43.531621  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:43.531627  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:43.531633  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:43.531638  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:43.531642  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:43.531648  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:43 GMT
	I0108 23:12:43.531653  411209 round_trippers.go:580]     Audit-Id: 4a9182c7-c957-400a-9795-af3818949a45
	I0108 23:12:43.531782  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"422","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0108 23:12:43.532063  411209 pod_ready.go:92] pod "etcd-multinode-659947" in "kube-system" namespace has status "Ready":"True"
	I0108 23:12:43.532077  411209 pod_ready.go:81] duration metric: took 4.29313ms waiting for pod "etcd-multinode-659947" in "kube-system" namespace to be "Ready" ...
	I0108 23:12:43.532107  411209 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-659947" in "kube-system" namespace to be "Ready" ...
	I0108 23:12:43.532157  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-659947
	I0108 23:12:43.532164  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:43.532171  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:43.532177  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:43.533791  411209 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 23:12:43.533805  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:43.533813  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:43.533819  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:43.533824  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:43 GMT
	I0108 23:12:43.533829  411209 round_trippers.go:580]     Audit-Id: 95d7dee2-8dfc-4985-8fa3-224d3d64e7d3
	I0108 23:12:43.533837  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:43.533842  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:43.533984  411209 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-659947","namespace":"kube-system","uid":"4091bb80-9af3-4a3a-864e-0a13751c0708","resourceVersion":"303","creationTimestamp":"2024-01-08T23:11:09Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"4c65b54013af928699c1cd97dd72acc7","kubernetes.io/config.mirror":"4c65b54013af928699c1cd97dd72acc7","kubernetes.io/config.seen":"2024-01-08T23:11:09.365607797Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:11:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0108 23:12:43.534331  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:12:43.534353  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:43.534360  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:43.534366  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:43.535862  411209 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 23:12:43.535876  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:43.535883  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:43.535890  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:43.535895  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:43.535900  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:43 GMT
	I0108 23:12:43.535913  411209 round_trippers.go:580]     Audit-Id: d2ef576e-8611-4dac-bcef-12ea6ce24506
	I0108 23:12:43.535922  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:43.536064  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"422","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0108 23:12:43.536320  411209 pod_ready.go:92] pod "kube-apiserver-multinode-659947" in "kube-system" namespace has status "Ready":"True"
	I0108 23:12:43.536333  411209 pod_ready.go:81] duration metric: took 4.216656ms waiting for pod "kube-apiserver-multinode-659947" in "kube-system" namespace to be "Ready" ...
	I0108 23:12:43.536340  411209 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-659947" in "kube-system" namespace to be "Ready" ...
	I0108 23:12:43.536383  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-659947
	I0108 23:12:43.536390  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:43.536397  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:43.536404  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:43.538020  411209 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 23:12:43.538041  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:43.538051  411209 round_trippers.go:580]     Audit-Id: f788351b-1b4f-4eb7-b7f7-d077fba1cc41
	I0108 23:12:43.538060  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:43.538069  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:43.538084  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:43.538093  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:43.538104  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:43 GMT
	I0108 23:12:43.538235  411209 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-659947","namespace":"kube-system","uid":"99044a00-503b-4f39-aec8-d541a5d88b61","resourceVersion":"340","creationTimestamp":"2024-01-08T23:11:09Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"35d04abfdea411f0288eb18c4ccfb806","kubernetes.io/config.mirror":"35d04abfdea411f0288eb18c4ccfb806","kubernetes.io/config.seen":"2024-01-08T23:11:09.365600379Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:11:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0108 23:12:43.538606  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:12:43.538618  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:43.538625  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:43.538631  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:43.540115  411209 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 23:12:43.540129  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:43.540136  411209 round_trippers.go:580]     Audit-Id: 05bef519-d084-4e89-af44-28645d93bc7b
	I0108 23:12:43.540141  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:43.540147  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:43.540155  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:43.540165  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:43.540176  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:43 GMT
	I0108 23:12:43.540315  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"422","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0108 23:12:43.540609  411209 pod_ready.go:92] pod "kube-controller-manager-multinode-659947" in "kube-system" namespace has status "Ready":"True"
	I0108 23:12:43.540624  411209 pod_ready.go:81] duration metric: took 4.277751ms waiting for pod "kube-controller-manager-multinode-659947" in "kube-system" namespace to be "Ready" ...
	I0108 23:12:43.540638  411209 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dz5gw" in "kube-system" namespace to be "Ready" ...
	I0108 23:12:43.713030  411209 request.go:629] Waited for 172.322401ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dz5gw
	I0108 23:12:43.713112  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dz5gw
	I0108 23:12:43.713119  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:43.713130  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:43.713144  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:43.715719  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:43.715746  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:43.715757  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:43.715774  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:43.715783  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:43 GMT
	I0108 23:12:43.715792  411209 round_trippers.go:580]     Audit-Id: 17911aba-541c-4214-8fff-54c2d7d07705
	I0108 23:12:43.715801  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:43.715809  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:43.715924  411209 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-dz5gw","generateName":"kube-proxy-","namespace":"kube-system","uid":"f9d44e89-2cfe-4a60-9266-3bcc869bf813","resourceVersion":"498","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d8964d22-9761-414a-9f1a-850b5da0c86f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d8964d22-9761-414a-9f1a-850b5da0c86f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0108 23:12:43.912640  411209 request.go:629] Waited for 196.254212ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:43.912710  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947-m02
	I0108 23:12:43.912715  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:43.912724  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:43.912730  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:43.915253  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:43.915315  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:43.915325  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:43 GMT
	I0108 23:12:43.915332  411209 round_trippers.go:580]     Audit-Id: 619dc07f-f0bc-4e5d-af46-4657d0112f66
	I0108 23:12:43.915340  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:43.915348  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:43.915358  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:43.915371  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:43.915513  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947-m02","uid":"4397fdc9-97e2-458b-8b29-d305289404dd","resourceVersion":"530","creationTimestamp":"2024-01-08T23:12:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_12_11_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:12:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5848 chars]
	I0108 23:12:43.915977  411209 pod_ready.go:92] pod "kube-proxy-dz5gw" in "kube-system" namespace has status "Ready":"True"
	I0108 23:12:43.916002  411209 pod_ready.go:81] duration metric: took 375.355102ms waiting for pod "kube-proxy-dz5gw" in "kube-system" namespace to be "Ready" ...
	I0108 23:12:43.916015  411209 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rf4sd" in "kube-system" namespace to be "Ready" ...
	I0108 23:12:44.112933  411209 request.go:629] Waited for 196.742662ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rf4sd
	I0108 23:12:44.113017  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rf4sd
	I0108 23:12:44.113026  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:44.113035  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:44.113042  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:44.115566  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:44.115586  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:44.115592  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:44 GMT
	I0108 23:12:44.115598  411209 round_trippers.go:580]     Audit-Id: daf38727-d2d4-463d-98dc-92a72c4e9c7e
	I0108 23:12:44.115606  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:44.115615  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:44.115624  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:44.115634  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:44.115804  411209 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rf4sd","generateName":"kube-proxy-","namespace":"kube-system","uid":"c616c195-de73-4c48-8660-a6d67916d665","resourceVersion":"409","creationTimestamp":"2024-01-08T23:11:22Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d8964d22-9761-414a-9f1a-850b5da0c86f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:11:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d8964d22-9761-414a-9f1a-850b5da0c86f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I0108 23:12:44.312620  411209 request.go:629] Waited for 196.31125ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:12:44.312704  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:12:44.312711  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:44.312722  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:44.312731  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:44.314933  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:44.314956  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:44.314967  411209 round_trippers.go:580]     Audit-Id: dd8c0844-6862-4ea5-9f7b-d670484abda7
	I0108 23:12:44.314975  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:44.314983  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:44.314992  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:44.315000  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:44.315008  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:44 GMT
	I0108 23:12:44.315132  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"422","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0108 23:12:44.315467  411209 pod_ready.go:92] pod "kube-proxy-rf4sd" in "kube-system" namespace has status "Ready":"True"
	I0108 23:12:44.315485  411209 pod_ready.go:81] duration metric: took 399.38944ms waiting for pod "kube-proxy-rf4sd" in "kube-system" namespace to be "Ready" ...
	I0108 23:12:44.315495  411209 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-659947" in "kube-system" namespace to be "Ready" ...
	I0108 23:12:44.513533  411209 request.go:629] Waited for 197.933111ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-659947
	I0108 23:12:44.513610  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-659947
	I0108 23:12:44.513615  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:44.513623  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:44.513630  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:44.516169  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:44.516193  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:44.516203  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:44.516212  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:44.516220  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:44.516226  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:44 GMT
	I0108 23:12:44.516233  411209 round_trippers.go:580]     Audit-Id: 16221be3-8a1d-4735-8e4e-7efbf44d3616
	I0108 23:12:44.516241  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:44.516354  411209 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-659947","namespace":"kube-system","uid":"bca5adfe-3eb1-4ad1-a236-d9ce4c6db898","resourceVersion":"304","creationTimestamp":"2024-01-08T23:11:09Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6ccc1302a59a76635d1eec9e1e275773","kubernetes.io/config.mirror":"6ccc1302a59a76635d1eec9e1e275773","kubernetes.io/config.seen":"2024-01-08T23:11:09.365604859Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:11:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0108 23:12:44.713140  411209 request.go:629] Waited for 196.395835ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:12:44.713239  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-659947
	I0108 23:12:44.713249  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:44.713258  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:44.713270  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:44.715527  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:44.715557  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:44.715567  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:44.715574  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:44.715583  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:44 GMT
	I0108 23:12:44.715590  411209 round_trippers.go:580]     Audit-Id: f2331e8c-b99e-45cf-a3a6-9323758a3fbe
	I0108 23:12:44.715599  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:44.715609  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:44.715718  411209 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"422","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:11:06Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0108 23:12:44.716036  411209 pod_ready.go:92] pod "kube-scheduler-multinode-659947" in "kube-system" namespace has status "Ready":"True"
	I0108 23:12:44.716054  411209 pod_ready.go:81] duration metric: took 400.553649ms waiting for pod "kube-scheduler-multinode-659947" in "kube-system" namespace to be "Ready" ...
	I0108 23:12:44.716065  411209 pod_ready.go:38] duration metric: took 1.200293642s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 23:12:44.716082  411209 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 23:12:44.716133  411209 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 23:12:44.727751  411209 system_svc.go:56] duration metric: took 11.65992ms WaitForService to wait for kubelet.
	I0108 23:12:44.727782  411209 kubeadm.go:581] duration metric: took 33.232856115s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 23:12:44.727806  411209 node_conditions.go:102] verifying NodePressure condition ...
	I0108 23:12:44.913304  411209 request.go:629] Waited for 185.400538ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0108 23:12:44.913391  411209 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0108 23:12:44.913396  411209 round_trippers.go:469] Request Headers:
	I0108 23:12:44.913404  411209 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:12:44.913412  411209 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:12:44.916183  411209 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:12:44.916214  411209 round_trippers.go:577] Response Headers:
	I0108 23:12:44.916224  411209 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:12:44.916232  411209 round_trippers.go:580]     Content-Type: application/json
	I0108 23:12:44.916240  411209 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36d543ad-49c1-46ea-8334-a904d8d7b024
	I0108 23:12:44.916248  411209 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b61b3e47-0db4-44cc-aca9-384067f6be81
	I0108 23:12:44.916256  411209 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:12:44 GMT
	I0108 23:12:44.916264  411209 round_trippers.go:580]     Audit-Id: ca2fb7ee-9d28-4103-b9e8-6ae578623d78
	I0108 23:12:44.916461  411209 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"531"},"items":[{"metadata":{"name":"multinode-659947","uid":"f54afe17-2624-4fc2-afd7-23e7025793a2","resourceVersion":"422","creationTimestamp":"2024-01-08T23:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-659947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_11_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 12840 chars]
	I0108 23:12:44.916954  411209 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0108 23:12:44.916968  411209 node_conditions.go:123] node cpu capacity is 8
	I0108 23:12:44.916980  411209 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0108 23:12:44.916985  411209 node_conditions.go:123] node cpu capacity is 8
	I0108 23:12:44.916991  411209 node_conditions.go:105] duration metric: took 189.180876ms to run NodePressure ...
	I0108 23:12:44.917003  411209 start.go:228] waiting for startup goroutines ...
	I0108 23:12:44.917030  411209 start.go:242] writing updated cluster config ...
	I0108 23:12:44.917316  411209 ssh_runner.go:195] Run: rm -f paused
	I0108 23:12:44.966234  411209 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0108 23:12:44.969514  411209 out.go:177] * Done! kubectl is now configured to use "multinode-659947" cluster and "default" namespace by default
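
	The round-tripper lines above are minikube's pod_ready helper polling the API server for each control-plane pod's Ready condition, and the "Waited ... due to client-side throttling" entries come from client-go's default client-side rate limiter (QPS/Burst), not server-side priority and fairness. A minimal sketch of the same poll, assuming a kubeconfig at the default location; this is an illustration, not minikube's actual pod_ready.go:

	    // Sketch only: poll a pod's Ready condition as the pod_ready lines above do.
	    package main

	    import (
	    	"context"
	    	"fmt"
	    	"time"

	    	corev1 "k8s.io/api/core/v1"
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/client-go/kubernetes"
	    	"k8s.io/client-go/tools/clientcmd"
	    )

	    func isReady(pod *corev1.Pod) bool {
	    	for _, c := range pod.Status.Conditions {
	    		if c.Type == corev1.PodReady {
	    			return c.Status == corev1.ConditionTrue
	    		}
	    	}
	    	return false
	    }

	    func main() {
	    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	    	if err != nil {
	    		panic(err)
	    	}
	    	// Low client-side limits like these are what produce the
	    	// "client-side throttling" waits logged above.
	    	cfg.QPS = 5
	    	cfg.Burst = 10
	    	cs, err := kubernetes.NewForConfig(cfg)
	    	if err != nil {
	    		panic(err)
	    	}
	    	deadline := time.Now().Add(6 * time.Minute) // the 6m0s budget in the log
	    	for time.Now().Before(deadline) {
	    		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
	    			"etcd-multinode-659947", metav1.GetOptions{})
	    		if err == nil && isReady(pod) {
	    			fmt.Println("pod is Ready")
	    			return
	    		}
	    		time.Sleep(400 * time.Millisecond)
	    	}
	    	fmt.Println("timed out waiting for pod to be Ready")
	    }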
	
	
	==> CRI-O <==
	Jan 08 23:11:54 multinode-659947 crio[962]: time="2024-01-08 23:11:54.298544682Z" level=info msg="Starting container: 209b655b2574856ac8c0de93ba9c1f476461f1f2f35eb987abf46e0f60a3e642" id=92bbe35f-5bab-4f75-a767-1ac79fd4b3f0 name=/runtime.v1.RuntimeService/StartContainer
	Jan 08 23:11:54 multinode-659947 crio[962]: time="2024-01-08 23:11:54.301118387Z" level=info msg="Created container 22bcb72e792c78652f99fdeed66e3e60586a52c447553b7767a6a1f84e568c8a: kube-system/coredns-5dd5756b68-7vbqm/coredns" id=4d3fac1d-7301-4da6-940d-fb72f6f2214f name=/runtime.v1.RuntimeService/CreateContainer
	Jan 08 23:11:54 multinode-659947 crio[962]: time="2024-01-08 23:11:54.301682861Z" level=info msg="Starting container: 22bcb72e792c78652f99fdeed66e3e60586a52c447553b7767a6a1f84e568c8a" id=86407a20-dd9a-4c82-9de5-c2ac9007fc32 name=/runtime.v1.RuntimeService/StartContainer
	Jan 08 23:11:54 multinode-659947 crio[962]: time="2024-01-08 23:11:54.305465473Z" level=info msg="Started container" PID=2329 containerID=209b655b2574856ac8c0de93ba9c1f476461f1f2f35eb987abf46e0f60a3e642 description=kube-system/storage-provisioner/storage-provisioner id=92bbe35f-5bab-4f75-a767-1ac79fd4b3f0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6d7eec25e67f5f05d86d61091c32e63f0ce09a00b7a1426f1b21abdad062b452
	Jan 08 23:11:54 multinode-659947 crio[962]: time="2024-01-08 23:11:54.309178539Z" level=info msg="Started container" PID=2336 containerID=22bcb72e792c78652f99fdeed66e3e60586a52c447553b7767a6a1f84e568c8a description=kube-system/coredns-5dd5756b68-7vbqm/coredns id=86407a20-dd9a-4c82-9de5-c2ac9007fc32 name=/runtime.v1.RuntimeService/StartContainer sandboxID=589bca5ca9701fa96378c446c4dffd3b2d1e29390eb588a9abb7e4d31ca19bf7
	Jan 08 23:12:46 multinode-659947 crio[962]: time="2024-01-08 23:12:46.285434534Z" level=info msg="Running pod sandbox: default/busybox-5bc68d56bd-d8rhc/POD" id=98b0ae84-c720-4b7c-a13d-35cd52cbb381 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 08 23:12:46 multinode-659947 crio[962]: time="2024-01-08 23:12:46.285524099Z" level=warning msg="Allowed annotations are specified for workload []"
	Jan 08 23:12:46 multinode-659947 crio[962]: time="2024-01-08 23:12:46.300722230Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-d8rhc Namespace:default ID:0b60eed7c6d8b30c14cc2bb057a74a05717b71cb3862bd93c42d089e9a798b4d UID:98660fb1-73cf-4fd4-a4ff-42885e60215a NetNS:/var/run/netns/06bb1194-814c-499b-8b17-4a189f9fd0c3 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jan 08 23:12:46 multinode-659947 crio[962]: time="2024-01-08 23:12:46.300762917Z" level=info msg="Adding pod default_busybox-5bc68d56bd-d8rhc to CNI network \"kindnet\" (type=ptp)"
	Jan 08 23:12:46 multinode-659947 crio[962]: time="2024-01-08 23:12:46.310715444Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-d8rhc Namespace:default ID:0b60eed7c6d8b30c14cc2bb057a74a05717b71cb3862bd93c42d089e9a798b4d UID:98660fb1-73cf-4fd4-a4ff-42885e60215a NetNS:/var/run/netns/06bb1194-814c-499b-8b17-4a189f9fd0c3 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jan 08 23:12:46 multinode-659947 crio[962]: time="2024-01-08 23:12:46.310837421Z" level=info msg="Checking pod default_busybox-5bc68d56bd-d8rhc for CNI network kindnet (type=ptp)"
	Jan 08 23:12:46 multinode-659947 crio[962]: time="2024-01-08 23:12:46.341923500Z" level=info msg="Ran pod sandbox 0b60eed7c6d8b30c14cc2bb057a74a05717b71cb3862bd93c42d089e9a798b4d with infra container: default/busybox-5bc68d56bd-d8rhc/POD" id=98b0ae84-c720-4b7c-a13d-35cd52cbb381 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 08 23:12:46 multinode-659947 crio[962]: time="2024-01-08 23:12:46.343036883Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=ad1f14b1-d77d-481a-a3ea-762cdc569fad name=/runtime.v1.ImageService/ImageStatus
	Jan 08 23:12:46 multinode-659947 crio[962]: time="2024-01-08 23:12:46.343323073Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=ad1f14b1-d77d-481a-a3ea-762cdc569fad name=/runtime.v1.ImageService/ImageStatus
	Jan 08 23:12:46 multinode-659947 crio[962]: time="2024-01-08 23:12:46.344115378Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=497142d8-0379-4391-aba9-59d700676b6d name=/runtime.v1.ImageService/PullImage
	Jan 08 23:12:46 multinode-659947 crio[962]: time="2024-01-08 23:12:46.345224152Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Jan 08 23:12:46 multinode-659947 crio[962]: time="2024-01-08 23:12:46.602602808Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Jan 08 23:12:47 multinode-659947 crio[962]: time="2024-01-08 23:12:47.051800617Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335" id=497142d8-0379-4391-aba9-59d700676b6d name=/runtime.v1.ImageService/PullImage
	Jan 08 23:12:47 multinode-659947 crio[962]: time="2024-01-08 23:12:47.052930654Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=2bcd9d71-3e08-4e91-927c-fce790ecba4f name=/runtime.v1.ImageService/ImageStatus
	Jan 08 23:12:47 multinode-659947 crio[962]: time="2024-01-08 23:12:47.053635404Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=2bcd9d71-3e08-4e91-927c-fce790ecba4f name=/runtime.v1.ImageService/ImageStatus
	Jan 08 23:12:47 multinode-659947 crio[962]: time="2024-01-08 23:12:47.054439596Z" level=info msg="Creating container: default/busybox-5bc68d56bd-d8rhc/busybox" id=159f63f0-3cd2-4faf-bef3-456368212d95 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 08 23:12:47 multinode-659947 crio[962]: time="2024-01-08 23:12:47.054586099Z" level=warning msg="Allowed annotations are specified for workload []"
	Jan 08 23:12:47 multinode-659947 crio[962]: time="2024-01-08 23:12:47.131574848Z" level=info msg="Created container 4adc63da36c9fc9648409de03a248c21ca55b97c5461310a06088dac538e2eef: default/busybox-5bc68d56bd-d8rhc/busybox" id=159f63f0-3cd2-4faf-bef3-456368212d95 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 08 23:12:47 multinode-659947 crio[962]: time="2024-01-08 23:12:47.132156247Z" level=info msg="Starting container: 4adc63da36c9fc9648409de03a248c21ca55b97c5461310a06088dac538e2eef" id=7f4d0820-a4f4-475b-bc7f-9ba018087c38 name=/runtime.v1.RuntimeService/StartContainer
	Jan 08 23:12:47 multinode-659947 crio[962]: time="2024-01-08 23:12:47.138037747Z" level=info msg="Started container" PID=2519 containerID=4adc63da36c9fc9648409de03a248c21ca55b97c5461310a06088dac538e2eef description=default/busybox-5bc68d56bd-d8rhc/busybox id=7f4d0820-a4f4-475b-bc7f-9ba018087c38 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0b60eed7c6d8b30c14cc2bb057a74a05717b71cb3862bd93c42d089e9a798b4d
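
	The CRI-O entries above trace the standard CRI sequence for starting the busybox pod: RunPodSandbox (with CNI wiring), ImageStatus reporting a miss, PullImage, then CreateContainer and StartContainer. A hedged sketch of the same gRPC calls against crio.sock using the published CRI API; the sandbox and container metadata values are illustrative, not taken from this run:

	    // Sketch only: the CRI call sequence visible in the CRI-O log above.
	    package main

	    import (
	    	"context"

	    	"google.golang.org/grpc"
	    	"google.golang.org/grpc/credentials/insecure"
	    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	    )

	    func main() {
	    	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
	    		grpc.WithTransportCredentials(insecure.NewCredentials()))
	    	if err != nil {
	    		panic(err)
	    	}
	    	defer conn.Close()
	    	rt := runtimeapi.NewRuntimeServiceClient(conn)
	    	img := runtimeapi.NewImageServiceClient(conn)
	    	ctx := context.Background()

	    	sandboxCfg := &runtimeapi.PodSandboxConfig{
	    		Metadata: &runtimeapi.PodSandboxMetadata{
	    			Name: "busybox-demo", Namespace: "default", Uid: "demo-uid",
	    		},
	    	}
	    	// 1. RunPodSandbox creates the infra container and attaches the pod
	    	//    to the CNI network ("Adding pod ... to CNI network" above).
	    	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	    	if err != nil {
	    		panic(err)
	    	}
	    	image := &runtimeapi.ImageSpec{Image: "gcr.io/k8s-minikube/busybox:1.28"}
	    	// 2. ImageStatus comes back empty ("Image ... not found"), so pull.
	    	st, err := img.ImageStatus(ctx, &runtimeapi.ImageStatusRequest{Image: image})
	    	if err != nil {
	    		panic(err)
	    	}
	    	if st.Image == nil {
	    		if _, err := img.PullImage(ctx, &runtimeapi.PullImageRequest{Image: image}); err != nil {
	    			panic(err)
	    		}
	    	}
	    	// 3. CreateContainer then StartContainer, as in the last entries above.
	    	created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
	    		PodSandboxId:  sb.PodSandboxId,
	    		Config: &runtimeapi.ContainerConfig{
	    			Metadata: &runtimeapi.ContainerMetadata{Name: "busybox"},
	    			Image:    image,
	    		},
	    		SandboxConfig: sandboxCfg,
	    	})
	    	if err != nil {
	    		panic(err)
	    	}
	    	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: created.ContainerId}); err != nil {
	    		panic(err)
	    	}
	    }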
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	4adc63da36c9f       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 seconds ago        Running             busybox                   0                   0b60eed7c6d8b       busybox-5bc68d56bd-d8rhc
	22bcb72e792c7       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      57 seconds ago       Running             coredns                   0                   589bca5ca9701       coredns-5dd5756b68-7vbqm
	209b655b25748       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      57 seconds ago       Running             storage-provisioner       0                   6d7eec25e67f5       storage-provisioner
	fa0bb7ce6c713       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      About a minute ago   Running             kindnet-cni               0                   639317ee0450a       kindnet-n2q2v
	4032380f149fb       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      About a minute ago   Running             kube-proxy                0                   1d14fc4f4daf4       kube-proxy-rf4sd
	3d4eff1707940       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      About a minute ago   Running             kube-apiserver            0                   d06246d52dd01       kube-apiserver-multinode-659947
	968c1f02bf3db       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      About a minute ago   Running             etcd                      0                   61233d16ad151       etcd-multinode-659947
	86db72e4f28b9       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      About a minute ago   Running             kube-controller-manager   0                   f52426b4abbfb       kube-controller-manager-multinode-659947
	5b61a19e0e47d       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      About a minute ago   Running             kube-scheduler            0                   a03e7eb8ac5e3       kube-scheduler-multinode-659947
	
	
	==> coredns [22bcb72e792c78652f99fdeed66e3e60586a52c447553b7767a6a1f84e568c8a] <==
	[INFO] 10.244.0.3:46912 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125094s
	[INFO] 10.244.1.2:56616 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147789s
	[INFO] 10.244.1.2:44981 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001703226s
	[INFO] 10.244.1.2:38741 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000081464s
	[INFO] 10.244.1.2:39726 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000076312s
	[INFO] 10.244.1.2:51558 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001178818s
	[INFO] 10.244.1.2:58157 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000065518s
	[INFO] 10.244.1.2:48735 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000064562s
	[INFO] 10.244.1.2:39685 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000054469s
	[INFO] 10.244.0.3:42570 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100588s
	[INFO] 10.244.0.3:60212 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000074874s
	[INFO] 10.244.0.3:52746 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000059762s
	[INFO] 10.244.0.3:38224 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000059813s
	[INFO] 10.244.1.2:43755 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013638s
	[INFO] 10.244.1.2:59813 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000089924s
	[INFO] 10.244.1.2:56142 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000061902s
	[INFO] 10.244.1.2:43026 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000065893s
	[INFO] 10.244.0.3:46176 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120575s
	[INFO] 10.244.0.3:33416 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000150148s
	[INFO] 10.244.0.3:35960 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000107357s
	[INFO] 10.244.0.3:60668 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000089078s
	[INFO] 10.244.1.2:50593 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123139s
	[INFO] 10.244.1.2:54498 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000084332s
	[INFO] 10.244.1.2:50223 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000059228s
	[INFO] 10.244.1.2:55791 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000049747s
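
	The coredns queries above show the pod resolver's search-path expansion at work: with the default ndots:5, the short name kubernetes.default is tried against the search domains, so both the bare name (forwarded upstream) and kubernetes.default.default.svc.cluster.local return NXDOMAIN, while the fully qualified kubernetes.default.svc.cluster.local answers NOERROR. A small sketch querying the cluster DNS service directly; the 10.96.0.10 address is read off the PTR records above, and the lookup only succeeds from inside the cluster network:

	    // Sketch only: ask the cluster DNS (10.96.0.10, per the PTR records
	    // above) for the fully qualified service name that resolved NOERROR.
	    package main

	    import (
	    	"context"
	    	"fmt"
	    	"net"
	    )

	    func main() {
	    	r := &net.Resolver{
	    		PreferGo: true,
	    		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
	    			var d net.Dialer
	    			return d.DialContext(ctx, network, "10.96.0.10:53")
	    		},
	    	}
	    	addrs, err := r.LookupHost(context.Background(), "kubernetes.default.svc.cluster.local")
	    	if err != nil {
	    		panic(err)
	    	}
	    	fmt.Println(addrs) // the kubernetes service ClusterIP, e.g. [10.96.0.1]
	    }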
	
	
	==> describe nodes <==
	Name:               multinode-659947
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-659947
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a
	                    minikube.k8s.io/name=multinode-659947
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_08T23_11_10_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 23:11:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-659947
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 23:12:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 23:11:53 +0000   Mon, 08 Jan 2024 23:11:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 23:11:53 +0000   Mon, 08 Jan 2024 23:11:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 23:11:53 +0000   Mon, 08 Jan 2024 23:11:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 23:11:53 +0000   Mon, 08 Jan 2024 23:11:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-659947
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	System Info:
	  Machine ID:                 78ceb765676c4e64825260e2571a6b87
	  System UUID:                97ffc553-ba2b-42f0-8b54-b7bb935c7c2a
	  Boot ID:                    fd589fcb-cd24-44e5-9159-e7f1d22abeda
	  Kernel Version:             5.15.0-1047-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-d8rhc                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  kube-system                 coredns-5dd5756b68-7vbqm                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     89s
	  kube-system                 etcd-multinode-659947                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         102s
	  kube-system                 kindnet-n2q2v                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      89s
	  kube-system                 kube-apiserver-multinode-659947             250m (3%)     0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-controller-manager-multinode-659947    200m (2%)     0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-proxy-rf4sd                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-scheduler-multinode-659947             100m (1%)     0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 88s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  108s (x8 over 108s)  kubelet          Node multinode-659947 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    108s (x8 over 108s)  kubelet          Node multinode-659947 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     108s (x8 over 108s)  kubelet          Node multinode-659947 status is now: NodeHasSufficientPID
	  Normal  Starting                 102s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  102s                 kubelet          Node multinode-659947 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    102s                 kubelet          Node multinode-659947 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     102s                 kubelet          Node multinode-659947 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           90s                  node-controller  Node multinode-659947 event: Registered Node multinode-659947 in Controller
	  Normal  NodeReady                58s                  kubelet          Node multinode-659947 status is now: NodeReady
	
	
	Name:               multinode-659947-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-659947-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a
	                    minikube.k8s.io/name=multinode-659947
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_01_08T23_12_11_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 23:12:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-659947-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 23:12:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 23:12:43 +0000   Mon, 08 Jan 2024 23:12:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 23:12:43 +0000   Mon, 08 Jan 2024 23:12:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 23:12:43 +0000   Mon, 08 Jan 2024 23:12:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 23:12:43 +0000   Mon, 08 Jan 2024 23:12:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-659947-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	System Info:
	  Machine ID:                 cc673273e5dd4a07a0bf245af35cc8d1
	  System UUID:                c004f919-c578-4ee1-8978-c34c07d17383
	  Boot ID:                    fd589fcb-cd24-44e5-9159-e7f1d22abeda
	  Kernel Version:             5.15.0-1047-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-wpl2n    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  kube-system                 kindnet-sncnx               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      41s
	  kube-system                 kube-proxy-dz5gw            0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 38s                kube-proxy       
	  Normal  NodeHasSufficientMemory  41s (x5 over 42s)  kubelet          Node multinode-659947-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s (x5 over 42s)  kubelet          Node multinode-659947-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s (x5 over 42s)  kubelet          Node multinode-659947-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           40s                node-controller  Node multinode-659947-m02 event: Registered Node multinode-659947-m02 in Controller
	  Normal  NodeReady                8s                 kubelet          Node multinode-659947-m02 status is now: NodeReady
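
	The Capacity and Allocatable blocks above are the same fields minikube's NodePressure check read earlier ("node storage ephemeral capacity is 304681132Ki", "node cpu capacity is 8"). A sketch reading them with client-go, under the same kubeconfig assumption as the earlier sketch:

	    // Sketch only: read the Capacity fields shown under "describe nodes" above.
	    package main

	    import (
	    	"context"
	    	"fmt"

	    	corev1 "k8s.io/api/core/v1"
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/client-go/kubernetes"
	    	"k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	    	if err != nil {
	    		panic(err)
	    	}
	    	cs, err := kubernetes.NewForConfig(cfg)
	    	if err != nil {
	    		panic(err)
	    	}
	    	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	    	if err != nil {
	    		panic(err)
	    	}
	    	for _, n := range nodes.Items {
	    		cpu := n.Status.Capacity[corev1.ResourceCPU]
	    		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
	    		// In this run both nodes report cpu=8, ephemeral-storage=304681132Ki.
	    		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	    	}
	    }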
	
	
	==> dmesg <==
	[  +0.007357] FS-Cache: O-key=[8] 'b4a20f0200000000'
	[  +0.004934] FS-Cache: N-cookie c=00000021 [p=00000015 fl=2 nc=0 na=1]
	[  +0.007782] FS-Cache: N-cookie d=000000003c359114{9p.inode} n=00000000801d8508
	[  +0.008729] FS-Cache: N-key=[8] 'b4a20f0200000000'
	[  +0.283629] FS-Cache: Duplicate cookie detected
	[  +0.004743] FS-Cache: O-cookie c=0000001b [p=00000015 fl=226 nc=0 na=1]
	[  +0.006795] FS-Cache: O-cookie d=000000003c359114{9p.inode} n=00000000dea4c64f
	[  +0.007371] FS-Cache: O-key=[8] 'baa20f0200000000'
	[  +0.004970] FS-Cache: N-cookie c=00000022 [p=00000015 fl=2 nc=0 na=1]
	[  +0.007951] FS-Cache: N-cookie d=000000003c359114{9p.inode} n=00000000eafd936e
	[  +0.007348] FS-Cache: N-key=[8] 'baa20f0200000000'
	[Jan 8 23:03] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 9e 5f 07 6d 9a 92 56 6c 05 13 9f 31 08 00
	[  +1.007768] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 9e 5f 07 6d 9a 92 56 6c 05 13 9f 31 08 00
	[  +2.015863] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 9e 5f 07 6d 9a 92 56 6c 05 13 9f 31 08 00
	[  +4.127700] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9e 5f 07 6d 9a 92 56 6c 05 13 9f 31 08 00
	[  +8.191395] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 9e 5f 07 6d 9a 92 56 6c 05 13 9f 31 08 00
	[ +16.126923] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 9e 5f 07 6d 9a 92 56 6c 05 13 9f 31 08 00
	[Jan 8 23:04] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 9e 5f 07 6d 9a 92 56 6c 05 13 9f 31 08 00
	
	
	==> etcd [968c1f02bf3db327f7d77fea1efb6501cef699936b4aa4daea4734f8a5c4f027] <==
	{"level":"info","ts":"2024-01-08T23:11:04.148625Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2024-01-08T23:11:04.148771Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2024-01-08T23:11:04.150397Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-01-08T23:11:04.150609Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-08T23:11:04.150702Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-08T23:11:04.150849Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2024-01-08T23:11:04.150896Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2024-01-08T23:11:04.674905Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2024-01-08T23:11:04.674964Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-01-08T23:11:04.674999Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2024-01-08T23:11:04.675016Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2024-01-08T23:11:04.67504Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2024-01-08T23:11:04.675052Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2024-01-08T23:11:04.675063Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2024-01-08T23:11:04.676094Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-659947 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-08T23:11:04.676149Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T23:11:04.676202Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T23:11:04.676193Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T23:11:04.676532Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-08T23:11:04.67672Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-08T23:11:04.676952Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T23:11:04.677064Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T23:11:04.677096Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T23:11:04.677936Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2024-01-08T23:11:04.678239Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 23:12:51 up  3:55,  0 users,  load average: 0.45, 0.75, 0.68
	Linux multinode-659947 5.15.0-1047-gcp #55~20.04.1-Ubuntu SMP Wed Nov 15 11:38:25 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [fa0bb7ce6c7132ef0ade8752f5723e82387cad2522fb1d677a7960b93e97df32] <==
	I0108 23:11:22.947302       1 main.go:116] setting mtu 1500 for CNI 
	I0108 23:11:22.947315       1 main.go:146] kindnetd IP family: "ipv4"
	I0108 23:11:22.947335       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0108 23:11:53.183543       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0108 23:11:53.192264       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0108 23:11:53.192303       1 main.go:227] handling current node
	I0108 23:12:03.207417       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0108 23:12:03.207468       1 main.go:227] handling current node
	I0108 23:12:13.218883       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0108 23:12:13.218911       1 main.go:227] handling current node
	I0108 23:12:13.218925       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0108 23:12:13.218932       1 main.go:250] Node multinode-659947-m02 has CIDR [10.244.1.0/24] 
	I0108 23:12:13.219108       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	I0108 23:12:23.231171       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0108 23:12:23.231195       1 main.go:227] handling current node
	I0108 23:12:23.231204       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0108 23:12:23.231209       1 main.go:250] Node multinode-659947-m02 has CIDR [10.244.1.0/24] 
	I0108 23:12:33.244778       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0108 23:12:33.244815       1 main.go:227] handling current node
	I0108 23:12:33.244825       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0108 23:12:33.244830       1 main.go:250] Node multinode-659947-m02 has CIDR [10.244.1.0/24] 
	I0108 23:12:43.249021       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0108 23:12:43.249046       1 main.go:227] handling current node
	I0108 23:12:43.249059       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0108 23:12:43.249065       1 main.go:250] Node multinode-659947-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [3d4eff1707940b68e93b90a2576fc893e8801332608d067bdb425a14d9677da4] <==
	I0108 23:11:06.499497       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0108 23:11:06.499540       1 shared_informer.go:318] Caches are synced for configmaps
	I0108 23:11:06.499546       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0108 23:11:06.499562       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0108 23:11:06.499722       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0108 23:11:06.500465       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0108 23:11:06.500957       1 controller.go:624] quota admission added evaluator for: namespaces
	I0108 23:11:06.504856       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0108 23:11:06.506107       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0108 23:11:06.551616       1 cache.go:39] Caches are synced for autoregister controller
	I0108 23:11:07.303845       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0108 23:11:07.308230       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0108 23:11:07.308249       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0108 23:11:07.699553       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0108 23:11:07.736225       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0108 23:11:07.809398       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0108 23:11:07.815004       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0108 23:11:07.815968       1 controller.go:624] quota admission added evaluator for: endpoints
	I0108 23:11:07.819701       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0108 23:11:08.377535       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0108 23:11:09.309848       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0108 23:11:09.320801       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0108 23:11:09.330379       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0108 23:11:21.993249       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0108 23:11:22.157993       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [86db72e4f28b9d54c9934f5027b03fcb05832aaf26dfbbe5245927fbe42ba97a] <==
	I0108 23:11:53.595038       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="125.711µs"
	I0108 23:11:53.606634       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="98.047µs"
	I0108 23:11:54.617257       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="115.471µs"
	I0108 23:11:54.636544       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="8.067127ms"
	I0108 23:11:54.636753       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="93.222µs"
	I0108 23:11:56.241622       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0108 23:12:10.809152       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-659947-m02\" does not exist"
	I0108 23:12:10.815280       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-659947-m02" podCIDRs=["10.244.1.0/24"]
	I0108 23:12:10.820403       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-dz5gw"
	I0108 23:12:10.822954       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-sncnx"
	I0108 23:12:11.243888       1 event.go:307] "Event occurred" object="multinode-659947-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-659947-m02 event: Registered Node multinode-659947-m02 in Controller"
	I0108 23:12:11.243974       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-659947-m02"
	I0108 23:12:43.145370       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-659947-m02"
	I0108 23:12:45.663684       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I0108 23:12:45.670912       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-wpl2n"
	I0108 23:12:45.675497       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-d8rhc"
	I0108 23:12:45.683635       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="20.265182ms"
	I0108 23:12:45.693840       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="10.149499ms"
	I0108 23:12:45.693933       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="44.907µs"
	I0108 23:12:45.695600       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="78.94µs"
	I0108 23:12:46.259354       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-wpl2n" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-wpl2n"
	I0108 23:12:47.397612       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="4.349363ms"
	I0108 23:12:47.397691       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="36.054µs"
	I0108 23:12:47.723378       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="7.88868ms"
	I0108 23:12:47.723509       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="77.709µs"
	
	
	==> kube-proxy [4032380f149fbc08ca38a49aafc8e57cbb9540190fe464750c8e0e329aaae4f4] <==
	I0108 23:11:22.965618       1 server_others.go:69] "Using iptables proxy"
	I0108 23:11:22.974874       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I0108 23:11:22.997229       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0108 23:11:22.998925       1 server_others.go:152] "Using iptables Proxier"
	I0108 23:11:22.998953       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0108 23:11:22.998960       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0108 23:11:22.998995       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0108 23:11:22.999220       1 server.go:846] "Version info" version="v1.28.4"
	I0108 23:11:22.999238       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 23:11:22.999800       1 config.go:188] "Starting service config controller"
	I0108 23:11:22.999823       1 config.go:97] "Starting endpoint slice config controller"
	I0108 23:11:22.999851       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0108 23:11:22.999853       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0108 23:11:22.999862       1 config.go:315] "Starting node config controller"
	I0108 23:11:22.999880       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0108 23:11:23.100750       1 shared_informer.go:318] Caches are synced for node config
	I0108 23:11:23.100794       1 shared_informer.go:318] Caches are synced for service config
	I0108 23:11:23.100801       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [5b61a19e0e47d406696203ba6158ed15666e422eb8311ed35aba49aadfe5db3f] <==
	W0108 23:11:06.471030       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0108 23:11:06.471156       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0108 23:11:06.470960       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0108 23:11:06.471174       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0108 23:11:06.470984       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0108 23:11:06.471192       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0108 23:11:06.470939       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 23:11:06.471209       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0108 23:11:06.471062       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 23:11:06.471226       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0108 23:11:06.471064       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 23:11:06.471242       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0108 23:11:06.471076       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0108 23:11:06.471282       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0108 23:11:07.294164       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 23:11:07.294203       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0108 23:11:07.294202       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 23:11:07.294226       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0108 23:11:07.310903       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 23:11:07.310935       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0108 23:11:07.462717       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0108 23:11:07.462747       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0108 23:11:07.516337       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 23:11:07.516371       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0108 23:11:08.064369       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 08 23:11:22 multinode-659947 kubelet[1594]: I0108 23:11:22.360891    1594 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c616c195-de73-4c48-8660-a6d67916d665-kube-proxy\") pod \"kube-proxy-rf4sd\" (UID: \"c616c195-de73-4c48-8660-a6d67916d665\") " pod="kube-system/kube-proxy-rf4sd"
	Jan 08 23:11:22 multinode-659947 kubelet[1594]: I0108 23:11:22.360916    1594 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c616c195-de73-4c48-8660-a6d67916d665-lib-modules\") pod \"kube-proxy-rf4sd\" (UID: \"c616c195-de73-4c48-8660-a6d67916d665\") " pod="kube-system/kube-proxy-rf4sd"
	Jan 08 23:11:22 multinode-659947 kubelet[1594]: I0108 23:11:22.360948    1594 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1abbdfe4-e966-4c67-bcb8-431c9f4402e3-cni-cfg\") pod \"kindnet-n2q2v\" (UID: \"1abbdfe4-e966-4c67-bcb8-431c9f4402e3\") " pod="kube-system/kindnet-n2q2v"
	Jan 08 23:11:22 multinode-659947 kubelet[1594]: I0108 23:11:22.360973    1594 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1abbdfe4-e966-4c67-bcb8-431c9f4402e3-lib-modules\") pod \"kindnet-n2q2v\" (UID: \"1abbdfe4-e966-4c67-bcb8-431c9f4402e3\") " pod="kube-system/kindnet-n2q2v"
	Jan 08 23:11:22 multinode-659947 kubelet[1594]: I0108 23:11:22.361000    1594 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1abbdfe4-e966-4c67-bcb8-431c9f4402e3-xtables-lock\") pod \"kindnet-n2q2v\" (UID: \"1abbdfe4-e966-4c67-bcb8-431c9f4402e3\") " pod="kube-system/kindnet-n2q2v"
	Jan 08 23:11:22 multinode-659947 kubelet[1594]: I0108 23:11:22.361038    1594 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpql9\" (UniqueName: \"kubernetes.io/projected/1abbdfe4-e966-4c67-bcb8-431c9f4402e3-kube-api-access-wpql9\") pod \"kindnet-n2q2v\" (UID: \"1abbdfe4-e966-4c67-bcb8-431c9f4402e3\") " pod="kube-system/kindnet-n2q2v"
	Jan 08 23:11:22 multinode-659947 kubelet[1594]: W0108 23:11:22.664191    1594 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/5d2b1864fb291cb87831fe263a8dbae2ec490d0c27f2d6deb1b4fd6d0f2d60bc/crio-1d14fc4f4daf430b1d723c6de5f370ee8a4908f6a3057e96d54004ba8e1bd87b WatchSource:0}: Error finding container 1d14fc4f4daf430b1d723c6de5f370ee8a4908f6a3057e96d54004ba8e1bd87b: Status 404 returned error can't find the container with id 1d14fc4f4daf430b1d723c6de5f370ee8a4908f6a3057e96d54004ba8e1bd87b
	Jan 08 23:11:22 multinode-659947 kubelet[1594]: W0108 23:11:22.664450    1594 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/5d2b1864fb291cb87831fe263a8dbae2ec490d0c27f2d6deb1b4fd6d0f2d60bc/crio-639317ee0450aa864412b5fb98d9831a7ce4687ee3c47dabd564bce01e47a22e WatchSource:0}: Error finding container 639317ee0450aa864412b5fb98d9831a7ce4687ee3c47dabd564bce01e47a22e: Status 404 returned error can't find the container with id 639317ee0450aa864412b5fb98d9831a7ce4687ee3c47dabd564bce01e47a22e
	Jan 08 23:11:23 multinode-659947 kubelet[1594]: I0108 23:11:23.560947    1594 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-n2q2v" podStartSLOduration=1.5608998029999999 podCreationTimestamp="2024-01-08 23:11:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-08 23:11:23.560624883 +0000 UTC m=+14.274204246" watchObservedRunningTime="2024-01-08 23:11:23.560899803 +0000 UTC m=+14.274479166"
	Jan 08 23:11:23 multinode-659947 kubelet[1594]: I0108 23:11:23.570113    1594 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-rf4sd" podStartSLOduration=1.5700667670000001 podCreationTimestamp="2024-01-08 23:11:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-08 23:11:23.569929715 +0000 UTC m=+14.283509113" watchObservedRunningTime="2024-01-08 23:11:23.570066767 +0000 UTC m=+14.283646132"
	Jan 08 23:11:53 multinode-659947 kubelet[1594]: I0108 23:11:53.568535    1594 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Jan 08 23:11:53 multinode-659947 kubelet[1594]: I0108 23:11:53.595364    1594 topology_manager.go:215] "Topology Admit Handler" podUID="5ab36954-d4e3-4e0c-8635-399567429001" podNamespace="kube-system" podName="coredns-5dd5756b68-7vbqm"
	Jan 08 23:11:53 multinode-659947 kubelet[1594]: I0108 23:11:53.596585    1594 topology_manager.go:215] "Topology Admit Handler" podUID="812cadd0-ea9b-4733-80f2-235d4f66e583" podNamespace="kube-system" podName="storage-provisioner"
	Jan 08 23:11:53 multinode-659947 kubelet[1594]: I0108 23:11:53.787868    1594 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/812cadd0-ea9b-4733-80f2-235d4f66e583-tmp\") pod \"storage-provisioner\" (UID: \"812cadd0-ea9b-4733-80f2-235d4f66e583\") " pod="kube-system/storage-provisioner"
	Jan 08 23:11:53 multinode-659947 kubelet[1594]: I0108 23:11:53.787919    1594 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5ab36954-d4e3-4e0c-8635-399567429001-config-volume\") pod \"coredns-5dd5756b68-7vbqm\" (UID: \"5ab36954-d4e3-4e0c-8635-399567429001\") " pod="kube-system/coredns-5dd5756b68-7vbqm"
	Jan 08 23:11:53 multinode-659947 kubelet[1594]: I0108 23:11:53.787943    1594 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjpnw\" (UniqueName: \"kubernetes.io/projected/812cadd0-ea9b-4733-80f2-235d4f66e583-kube-api-access-sjpnw\") pod \"storage-provisioner\" (UID: \"812cadd0-ea9b-4733-80f2-235d4f66e583\") " pod="kube-system/storage-provisioner"
	Jan 08 23:11:53 multinode-659947 kubelet[1594]: I0108 23:11:53.787963    1594 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcrvv\" (UniqueName: \"kubernetes.io/projected/5ab36954-d4e3-4e0c-8635-399567429001-kube-api-access-hcrvv\") pod \"coredns-5dd5756b68-7vbqm\" (UID: \"5ab36954-d4e3-4e0c-8635-399567429001\") " pod="kube-system/coredns-5dd5756b68-7vbqm"
	Jan 08 23:11:54 multinode-659947 kubelet[1594]: W0108 23:11:54.236321    1594 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/5d2b1864fb291cb87831fe263a8dbae2ec490d0c27f2d6deb1b4fd6d0f2d60bc/crio-6d7eec25e67f5f05d86d61091c32e63f0ce09a00b7a1426f1b21abdad062b452 WatchSource:0}: Error finding container 6d7eec25e67f5f05d86d61091c32e63f0ce09a00b7a1426f1b21abdad062b452: Status 404 returned error can't find the container with id 6d7eec25e67f5f05d86d61091c32e63f0ce09a00b7a1426f1b21abdad062b452
	Jan 08 23:11:54 multinode-659947 kubelet[1594]: W0108 23:11:54.236637    1594 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/5d2b1864fb291cb87831fe263a8dbae2ec490d0c27f2d6deb1b4fd6d0f2d60bc/crio-589bca5ca9701fa96378c446c4dffd3b2d1e29390eb588a9abb7e4d31ca19bf7 WatchSource:0}: Error finding container 589bca5ca9701fa96378c446c4dffd3b2d1e29390eb588a9abb7e4d31ca19bf7: Status 404 returned error can't find the container with id 589bca5ca9701fa96378c446c4dffd3b2d1e29390eb588a9abb7e4d31ca19bf7
	Jan 08 23:11:54 multinode-659947 kubelet[1594]: I0108 23:11:54.617381    1594 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-7vbqm" podStartSLOduration=32.617325398 podCreationTimestamp="2024-01-08 23:11:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-08 23:11:54.616980295 +0000 UTC m=+45.330559660" watchObservedRunningTime="2024-01-08 23:11:54.617325398 +0000 UTC m=+45.330904765"
	Jan 08 23:11:54 multinode-659947 kubelet[1594]: I0108 23:11:54.639867    1594 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=31.639813933 podCreationTimestamp="2024-01-08 23:11:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-08 23:11:54.639613128 +0000 UTC m=+45.353192492" watchObservedRunningTime="2024-01-08 23:11:54.639813933 +0000 UTC m=+45.353393301"
	Jan 08 23:12:45 multinode-659947 kubelet[1594]: I0108 23:12:45.684137    1594 topology_manager.go:215] "Topology Admit Handler" podUID="98660fb1-73cf-4fd4-a4ff-42885e60215a" podNamespace="default" podName="busybox-5bc68d56bd-d8rhc"
	Jan 08 23:12:45 multinode-659947 kubelet[1594]: I0108 23:12:45.880979    1594 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncsd8\" (UniqueName: \"kubernetes.io/projected/98660fb1-73cf-4fd4-a4ff-42885e60215a-kube-api-access-ncsd8\") pod \"busybox-5bc68d56bd-d8rhc\" (UID: \"98660fb1-73cf-4fd4-a4ff-42885e60215a\") " pod="default/busybox-5bc68d56bd-d8rhc"
	Jan 08 23:12:46 multinode-659947 kubelet[1594]: W0108 23:12:46.340276    1594 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/5d2b1864fb291cb87831fe263a8dbae2ec490d0c27f2d6deb1b4fd6d0f2d60bc/crio-0b60eed7c6d8b30c14cc2bb057a74a05717b71cb3862bd93c42d089e9a798b4d WatchSource:0}: Error finding container 0b60eed7c6d8b30c14cc2bb057a74a05717b71cb3862bd93c42d089e9a798b4d: Status 404 returned error can't find the container with id 0b60eed7c6d8b30c14cc2bb057a74a05717b71cb3862bd93c42d089e9a798b4d
	Jan 08 23:12:47 multinode-659947 kubelet[1594]: I0108 23:12:47.715608    1594 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-5bc68d56bd-d8rhc" podStartSLOduration=2.006651476 podCreationTimestamp="2024-01-08 23:12:45 +0000 UTC" firstStartedPulling="2024-01-08 23:12:46.343492036 +0000 UTC m=+97.057071391" lastFinishedPulling="2024-01-08 23:12:47.052391315 +0000 UTC m=+97.765970666" observedRunningTime="2024-01-08 23:12:47.715108066 +0000 UTC m=+98.428687429" watchObservedRunningTime="2024-01-08 23:12:47.715550751 +0000 UTC m=+98.429130115"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-659947 -n multinode-659947
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-659947 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.47s)

                                                
                                    
x
+
TestRunningBinaryUpgrade (61.91s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.9.0.3370339141.exe start -p running-upgrade-605514 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.9.0.3370339141.exe start -p running-upgrade-605514 --memory=2200 --vm-driver=docker  --container-runtime=crio: (57.022332874s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-605514 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-605514 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (2.354825552s)

                                                
                                                
-- stdout --
	* [running-upgrade-605514] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17830-321683/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-321683/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-605514 in cluster running-upgrade-605514
	* Pulling base image v0.0.42-1704751654-17830 ...
	* Updating the running docker "running-upgrade-605514" container ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 23:26:04.114916  506324 out.go:296] Setting OutFile to fd 1 ...
	I0108 23:26:04.115036  506324 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 23:26:04.115046  506324 out.go:309] Setting ErrFile to fd 2...
	I0108 23:26:04.115051  506324 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 23:26:04.115339  506324 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-321683/.minikube/bin
	I0108 23:26:04.115888  506324 out.go:303] Setting JSON to false
	I0108 23:26:04.117427  506324 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":14896,"bootTime":1704741468,"procs":619,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 23:26:04.117510  506324 start.go:138] virtualization: kvm guest
	I0108 23:26:04.119938  506324 out.go:177] * [running-upgrade-605514] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 23:26:04.121391  506324 out.go:177]   - MINIKUBE_LOCATION=17830
	I0108 23:26:04.121419  506324 notify.go:220] Checking for updates...
	I0108 23:26:04.122816  506324 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 23:26:04.124227  506324 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17830-321683/kubeconfig
	I0108 23:26:04.125734  506324 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-321683/.minikube
	I0108 23:26:04.127292  506324 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 23:26:04.128731  506324 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 23:26:04.130768  506324 config.go:182] Loaded profile config "running-upgrade-605514": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0108 23:26:04.130810  506324 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617
	I0108 23:26:04.133262  506324 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0108 23:26:04.134732  506324 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 23:26:04.159250  506324 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 23:26:04.159407  506324 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 23:26:04.216935  506324 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:88 OomKillDisable:true NGoroutines:71 SystemTime:2024-01-08 23:26:04.207988742 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 23:26:04.217047  506324 docker.go:295] overlay module found
	I0108 23:26:04.219011  506324 out.go:177] * Using the docker driver based on existing profile
	I0108 23:26:04.220345  506324 start.go:298] selected driver: docker
	I0108 23:26:04.220362  506324 start.go:902] validating driver "docker" against &{Name:running-upgrade-605514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-605514 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I0108 23:26:04.220452  506324 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 23:26:04.221272  506324 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 23:26:04.275222  506324 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:88 OomKillDisable:true NGoroutines:71 SystemTime:2024-01-08 23:26:04.266159035 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 23:26:04.275580  506324 cni.go:84] Creating CNI manager for ""
	I0108 23:26:04.275601  506324 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I0108 23:26:04.275610  506324 start_flags.go:323] config:
	{Name:running-upgrade-605514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-605514 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I0108 23:26:04.277747  506324 out.go:177] * Starting control plane node running-upgrade-605514 in cluster running-upgrade-605514
	I0108 23:26:04.279008  506324 cache.go:121] Beginning downloading kic base image for docker with crio
	I0108 23:26:04.280383  506324 out.go:177] * Pulling base image v0.0.42-1704751654-17830 ...
	I0108 23:26:04.281540  506324 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I0108 23:26:04.281671  506324 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 in local docker daemon
	I0108 23:26:04.303997  506324 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 in local docker daemon, skipping pull
	I0108 23:26:04.304066  506324 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 exists in daemon, skipping load
	W0108 23:26:04.304612  506324 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0108 23:26:04.304810  506324 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/running-upgrade-605514/config.json ...
	I0108 23:26:04.304804  506324 cache.go:107] acquiring lock: {Name:mk9457a2c5372718c08ae7f84ccfdc3732bf518a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:26:04.304815  506324 cache.go:107] acquiring lock: {Name:mk83523bf85d4732ee2f1368445f685aa5544e2b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:26:04.304865  506324 cache.go:107] acquiring lock: {Name:mk213ea7d03258ed33cf20504889993fc4dbcf5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:26:04.304900  506324 cache.go:115] /home/jenkins/minikube-integration/17830-321683/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0108 23:26:04.304912  506324 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17830-321683/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 122.152µs
	I0108 23:26:04.304927  506324 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17830-321683/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0108 23:26:04.304906  506324 cache.go:107] acquiring lock: {Name:mk569c69694f4bb94b05696ecccc4f5144675d1c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:26:04.304941  506324 cache.go:115] /home/jenkins/minikube-integration/17830-321683/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 exists
	I0108 23:26:04.304931  506324 cache.go:107] acquiring lock: {Name:mk1ac519c6de83377d0fb7d22f8808738579fda5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:26:04.304953  506324 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/17830-321683/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0" took 89.192µs
	I0108 23:26:04.304960  506324 cache.go:115] /home/jenkins/minikube-integration/17830-321683/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 exists
	I0108 23:26:04.304964  506324 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/17830-321683/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 succeeded
	I0108 23:26:04.304823  506324 cache.go:107] acquiring lock: {Name:mk0d441ab511ff2aeee8fc071de52aadec82390d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:26:04.304968  506324 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/17830-321683/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0" took 64.199µs
	I0108 23:26:04.304977  506324 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/17830-321683/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 succeeded
	I0108 23:26:04.304981  506324 cache.go:115] /home/jenkins/minikube-integration/17830-321683/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 exists
	I0108 23:26:04.304990  506324 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/17830-321683/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0" took 60.342µs
	I0108 23:26:04.304994  506324 cache.go:115] /home/jenkins/minikube-integration/17830-321683/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 exists
	I0108 23:26:04.304999  506324 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/17830-321683/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 succeeded
	I0108 23:26:04.304993  506324 cache.go:107] acquiring lock: {Name:mk76ca98bb029a807f4f1a8f2da36558192aea2a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:26:04.305002  506324 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17830-321683/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2" took 186.86µs
	I0108 23:26:04.305014  506324 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17830-321683/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 succeeded
	I0108 23:26:04.304916  506324 cache.go:115] /home/jenkins/minikube-integration/17830-321683/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 exists
	I0108 23:26:04.305023  506324 cache.go:194] Successfully downloaded all kic artifacts
	I0108 23:26:04.305023  506324 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/17830-321683/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0" took 221.823µs
	I0108 23:26:04.305029  506324 cache.go:115] /home/jenkins/minikube-integration/17830-321683/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I0108 23:26:04.305034  506324 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/17830-321683/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 succeeded
	I0108 23:26:04.305027  506324 cache.go:107] acquiring lock: {Name:mk6b6fccd491890753c37ec8790b8828f323ce0d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:26:04.305037  506324 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17830-321683/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 46.901µs
	I0108 23:26:04.305045  506324 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17830-321683/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I0108 23:26:04.305046  506324 start.go:365] acquiring machines lock for running-upgrade-605514: {Name:mk68454eafe3e545c8f81181ed7089686198738d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:26:04.305070  506324 cache.go:115] /home/jenkins/minikube-integration/17830-321683/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 exists
	I0108 23:26:04.305077  506324 cache.go:96] cache image "registry.k8s.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/17830-321683/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7" took 52.238µs
	I0108 23:26:04.305084  506324 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/17830-321683/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 succeeded
	I0108 23:26:04.305097  506324 cache.go:87] Successfully saved all images to host disk.
	I0108 23:26:04.305117  506324 start.go:369] acquired machines lock for "running-upgrade-605514" in 58.455µs
	I0108 23:26:04.305138  506324 start.go:96] Skipping create...Using existing machine configuration
	I0108 23:26:04.305147  506324 fix.go:54] fixHost starting: m01
	I0108 23:26:04.305440  506324 cli_runner.go:164] Run: docker container inspect running-upgrade-605514 --format={{.State.Status}}
	I0108 23:26:04.328073  506324 fix.go:102] recreateIfNeeded on running-upgrade-605514: state=Running err=<nil>
	W0108 23:26:04.328110  506324 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 23:26:04.331211  506324 out.go:177] * Updating the running docker "running-upgrade-605514" container ...
	I0108 23:26:04.332890  506324 machine.go:88] provisioning docker machine ...
	I0108 23:26:04.332939  506324 ubuntu.go:169] provisioning hostname "running-upgrade-605514"
	I0108 23:26:04.333016  506324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-605514
	I0108 23:26:04.357626  506324 main.go:141] libmachine: Using SSH client type: native
	I0108 23:26:04.358158  506324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 33276 <nil> <nil>}
	I0108 23:26:04.358179  506324 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-605514 && echo "running-upgrade-605514" | sudo tee /etc/hostname
	I0108 23:26:04.487950  506324 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-605514
	
	I0108 23:26:04.488049  506324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-605514
	I0108 23:26:04.505136  506324 main.go:141] libmachine: Using SSH client type: native
	I0108 23:26:04.505518  506324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 33276 <nil> <nil>}
	I0108 23:26:04.505545  506324 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-605514' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-605514/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-605514' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 23:26:04.611346  506324 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 23:26:04.611380  506324 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17830-321683/.minikube CaCertPath:/home/jenkins/minikube-integration/17830-321683/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17830-321683/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17830-321683/.minikube}
	I0108 23:26:04.611405  506324 ubuntu.go:177] setting up certificates
	I0108 23:26:04.611416  506324 provision.go:83] configureAuth start
	I0108 23:26:04.611470  506324 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-605514
	I0108 23:26:04.629968  506324 provision.go:138] copyHostCerts
	I0108 23:26:04.630049  506324 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-321683/.minikube/ca.pem, removing ...
	I0108 23:26:04.630064  506324 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-321683/.minikube/ca.pem
	I0108 23:26:04.630145  506324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-321683/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17830-321683/.minikube/ca.pem (1082 bytes)
	I0108 23:26:04.630317  506324 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-321683/.minikube/cert.pem, removing ...
	I0108 23:26:04.630332  506324 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-321683/.minikube/cert.pem
	I0108 23:26:04.630380  506324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-321683/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17830-321683/.minikube/cert.pem (1123 bytes)
	I0108 23:26:04.630465  506324 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-321683/.minikube/key.pem, removing ...
	I0108 23:26:04.630477  506324 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-321683/.minikube/key.pem
	I0108 23:26:04.630501  506324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-321683/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17830-321683/.minikube/key.pem (1679 bytes)
	I0108 23:26:04.630550  506324 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17830-321683/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17830-321683/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17830-321683/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-605514 san=[172.17.0.2 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-605514]
	I0108 23:26:04.838199  506324 provision.go:172] copyRemoteCerts
	I0108 23:26:04.838278  506324 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 23:26:04.838316  506324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-605514
	I0108 23:26:04.865454  506324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33276 SSHKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/running-upgrade-605514/id_rsa Username:docker}
	I0108 23:26:04.959062  506324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-321683/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0108 23:26:04.986390  506324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-321683/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0108 23:26:05.003809  506324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-321683/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 23:26:05.020754  506324 provision.go:86] duration metric: configureAuth took 409.324167ms
	I0108 23:26:05.020789  506324 ubuntu.go:193] setting minikube options for container-runtime
	I0108 23:26:05.020975  506324 config.go:182] Loaded profile config "running-upgrade-605514": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0108 23:26:05.021106  506324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-605514
	I0108 23:26:05.038704  506324 main.go:141] libmachine: Using SSH client type: native
	I0108 23:26:05.039047  506324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 33276 <nil> <nil>}
	I0108 23:26:05.039066  506324 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 23:26:05.497228  506324 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 23:26:05.497282  506324 machine.go:91] provisioned docker machine in 1.164356544s
	I0108 23:26:05.497294  506324 start.go:300] post-start starting for "running-upgrade-605514" (driver="docker")
	I0108 23:26:05.497307  506324 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 23:26:05.497395  506324 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 23:26:05.497458  506324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-605514
	I0108 23:26:05.513759  506324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33276 SSHKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/running-upgrade-605514/id_rsa Username:docker}
	I0108 23:26:05.594570  506324 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 23:26:05.597396  506324 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 23:26:05.597427  506324 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 23:26:05.597436  506324 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 23:26:05.597444  506324 info.go:137] Remote host: Ubuntu 19.10
	I0108 23:26:05.597458  506324 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-321683/.minikube/addons for local assets ...
	I0108 23:26:05.597516  506324 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-321683/.minikube/files for local assets ...
	I0108 23:26:05.597602  506324 filesync.go:149] local asset: /home/jenkins/minikube-integration/17830-321683/.minikube/files/etc/ssl/certs/3283842.pem -> 3283842.pem in /etc/ssl/certs
	I0108 23:26:05.597693  506324 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 23:26:05.604542  506324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-321683/.minikube/files/etc/ssl/certs/3283842.pem --> /etc/ssl/certs/3283842.pem (1708 bytes)
	I0108 23:26:05.620809  506324 start.go:303] post-start completed in 123.497574ms
	I0108 23:26:05.620930  506324 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 23:26:05.620987  506324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-605514
	I0108 23:26:05.638752  506324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33276 SSHKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/running-upgrade-605514/id_rsa Username:docker}
	I0108 23:26:05.719966  506324 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 23:26:05.724231  506324 fix.go:56] fixHost completed within 1.419077966s
	I0108 23:26:05.724264  506324 start.go:83] releasing machines lock for "running-upgrade-605514", held for 1.419131889s
	I0108 23:26:05.724345  506324 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-605514
	I0108 23:26:05.740868  506324 ssh_runner.go:195] Run: cat /version.json
	I0108 23:26:05.740919  506324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-605514
	I0108 23:26:05.740958  506324 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 23:26:05.741036  506324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-605514
	I0108 23:26:05.760388  506324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33276 SSHKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/running-upgrade-605514/id_rsa Username:docker}
	I0108 23:26:05.768435  506324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33276 SSHKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/running-upgrade-605514/id_rsa Username:docker}
	W0108 23:26:05.846640  506324 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0108 23:26:05.846735  506324 ssh_runner.go:195] Run: systemctl --version
	I0108 23:26:05.880564  506324 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 23:26:05.929702  506324 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 23:26:05.934046  506324 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 23:26:05.954884  506324 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0108 23:26:05.954976  506324 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 23:26:05.982011  506324 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0108 23:26:05.982041  506324 start.go:475] detecting cgroup driver to use...
	I0108 23:26:05.982076  506324 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0108 23:26:05.982126  506324 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 23:26:06.003405  506324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 23:26:06.012415  506324 docker.go:203] disabling cri-docker service (if available) ...
	I0108 23:26:06.012469  506324 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 23:26:06.021019  506324 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 23:26:06.031413  506324 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0108 23:26:06.041037  506324 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0108 23:26:06.041091  506324 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 23:26:06.121428  506324 docker.go:219] disabling docker service ...
	I0108 23:26:06.121501  506324 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 23:26:06.131139  506324 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 23:26:06.140656  506324 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 23:26:06.229610  506324 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 23:26:06.360333  506324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 23:26:06.372583  506324 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 23:26:06.391050  506324 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0108 23:26:06.391109  506324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 23:26:06.401532  506324 out.go:177] 
	W0108 23:26:06.402868  506324 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0108 23:26:06.402888  506324 out.go:239] * 
	* 
	W0108 23:26:06.403876  506324 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 23:26:06.405406  506324 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:145: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-605514 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2024-01-08 23:26:06.424126582 +0000 UTC m=+2067.168776336
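Note: the RUNTIME_ENABLE failure above looks like a compatibility gap rather than infrastructure flake. The v1.9.0-era kicbase image (kicbase:v0.0.8, Ubuntu 19.10 per the provisioning log) ships a single monolithic /etc/crio/crio.conf, while the current binary unconditionally rewrites the drop-in file /etc/crio/crio.conf.d/02-crio.conf, so the sed exits with status 2 and start aborts. A defensive fix would probe for the drop-in before editing and fall back to the monolithic file. A minimal Go sketch of that idea (pickCrioConf and the run callback are hypothetical names, not minikube's actual API):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// pauseImageSedCmd rebuilds the sed invocation from the trace above,
	// parameterized by whichever config file actually exists on the node.
	func pauseImageSedCmd(conf, pauseImage string) string {
		return fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf)
	}

	// pickCrioConf probes for the modern drop-in first, then falls back to
	// the monolithic config shipped by older base images such as Ubuntu 19.10.
	func pickCrioConf(run func(cmd string) error) (string, error) {
		for _, c := range []string{"/etc/crio/crio.conf.d/02-crio.conf", "/etc/crio/crio.conf"} {
			if err := run("sudo test -f " + c); err == nil {
				return c, nil
			}
		}
		return "", fmt.Errorf("no cri-o configuration file found")
	}

	func main() {
		// Local stand-in for minikube's ssh_runner: run via sh on this host.
		run := func(cmd string) error { return exec.Command("sh", "-c", cmd).Run() }
		conf, err := pickCrioConf(run)
		if err != nil {
			fmt.Println("would exit with RUNTIME_ENABLE:", err)
			return
		}
		fmt.Println("would run:", pauseImageSedCmd(conf, "registry.k8s.io/pause:3.2"))
	}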
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-605514
helpers_test.go:235: (dbg) docker inspect running-upgrade-605514:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "28dc2c791f96ceb58a7a0b3187d2a85a1cf98514379f078461ab8f52b665cfaa",
	        "Created": "2024-01-08T23:25:07.551530506Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 496393,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-08T23:25:08.069674679Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/28dc2c791f96ceb58a7a0b3187d2a85a1cf98514379f078461ab8f52b665cfaa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/28dc2c791f96ceb58a7a0b3187d2a85a1cf98514379f078461ab8f52b665cfaa/hostname",
	        "HostsPath": "/var/lib/docker/containers/28dc2c791f96ceb58a7a0b3187d2a85a1cf98514379f078461ab8f52b665cfaa/hosts",
	        "LogPath": "/var/lib/docker/containers/28dc2c791f96ceb58a7a0b3187d2a85a1cf98514379f078461ab8f52b665cfaa/28dc2c791f96ceb58a7a0b3187d2a85a1cf98514379f078461ab8f52b665cfaa-json.log",
	        "Name": "/running-upgrade-605514",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-605514:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9167af142d01cf6223091026090011273a297489a23d364a6e018ba83ef9123f-init/diff:/var/lib/docker/overlay2/08a98ae6186d027f7a17c54cbf0df3ba6f63695cc2f4f7fcef8d08fc34a9af01/diff:/var/lib/docker/overlay2/93e153aae020b048e2e72971b91d7393539a85e751f46c55ba3028e0168de101/diff:/var/lib/docker/overlay2/eb9afbf2c6a8f38f77ce8d36abf21bbd3837ea5ae4d24a01ca96dc0d21c845aa/diff:/var/lib/docker/overlay2/c38d0d776fda40438d6bf59c2fe9ddf56686b8249b7df2a0d24fc7737f5decc8/diff:/var/lib/docker/overlay2/eb714102658fc0b18c8984bf3766da1d8c5e3320bcb1df97582c7277233a0a6a/diff:/var/lib/docker/overlay2/5f2f3b789dddbe45265dd22c31573078cdaf86be9ac71418bff338bb3518445a/diff:/var/lib/docker/overlay2/8f16fa4ad6e37ea3cf274794e40e01a3be8825e24f0b49d58d47d783eb8f2313/diff:/var/lib/docker/overlay2/6d1cfb8bba3f39224202d6721a38db913cb76f43cac6cf47161af1b884516fb4/diff:/var/lib/docker/overlay2/81912e7da52f931bf5c9e521b17e86736cd88a585d68533e7d0f4f264ce61375/diff:/var/lib/docker/overlay2/7bf820
2cd5b6cef211b7d478bb3ee6e71c06fcc4c690657eae820cd7fd6196e0/diff:/var/lib/docker/overlay2/55b687778c4bb2d484c468db1967422511ccad13e41e74f386f66734be44e131/diff:/var/lib/docker/overlay2/dd65c31ef1e0200f7bbe5f6144d04fa22b4f018f8129f2f3b98686299ff4ce01/diff:/var/lib/docker/overlay2/a9aedea222629ee17cbe4f7bb143e9e04f7729cb7cdab356cf3b57d90d3fde75/diff:/var/lib/docker/overlay2/3fc5074577e061f176ca2584bd601adac5431f84ee1d3c294c640b5e62d65bf7/diff:/var/lib/docker/overlay2/cd011392675cff053d596f8b8d565d33b25c7728971b38430f8c4b11cbb9e55c/diff:/var/lib/docker/overlay2/48d489fb1d7e8cad753dcc6c4d5d92349969227b6aa569355ecfe0436366d583/diff:/var/lib/docker/overlay2/1bd83f28e700794cdbd592262f5574ebbf188f2670f958d1f4288b1d3d213266/diff:/var/lib/docker/overlay2/74b9f1628d5a91bd50e2420e1531d29efd39bc68491bebb88d984fa986ad1bb7/diff:/var/lib/docker/overlay2/602494948f3e00eedf6082e40ac2c9e113be7190617c24d52a45b662bebd8170/diff:/var/lib/docker/overlay2/d411e151ff8ee6c7e607a2ecd490ac92fb6637f0f7512f3d29ede4a2d3d816a3/diff:/var/lib/d
ocker/overlay2/aeda74860f1e9692928f461516f7a62af6f4c8f8d4f8637e38f5ff3a9fb9d571/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9167af142d01cf6223091026090011273a297489a23d364a6e018ba83ef9123f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9167af142d01cf6223091026090011273a297489a23d364a6e018ba83ef9123f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9167af142d01cf6223091026090011273a297489a23d364a6e018ba83ef9123f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-605514",
	                "Source": "/var/lib/docker/volumes/running-upgrade-605514/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-605514",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-605514",
	                "name.minikube.sigs.k8s.io": "running-upgrade-605514",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6f8e12c8be4228581e4913ae8c43ccef2ba1e6396a65a86d74b789f4bcc8b9c9",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33276"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33275"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33274"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/6f8e12c8be42",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "dec08572f6a0c0b320bff4b4385b138585d45c9d3797c2b257f9c937f8b61d39",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "546f8221d3e7c27d9cf030783bfb09debb42102c2ebad51419c6ac60bc5f6a6d",
	                    "EndpointID": "dec08572f6a0c0b320bff4b4385b138585d45c9d3797c2b257f9c937f8b61d39",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
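The inspect output confirms the wiring the provisioner depended on: 22/tcp is published on 127.0.0.1:33276, the same host port the SSH dialer in the stderr trace used. The harness pulls that port out with a Go template; the lookup can be reproduced standalone (format string copied verbatim from the cli_runner lines above, container name from this run):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostSSHPort mirrors the cli_runner call from the trace: ask the Docker
	// CLI for the host port bound to the container's 22/tcp endpoint.
	func hostSSHPort(container string) (string, error) {
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := hostSSHPort("running-upgrade-605514")
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("ssh published on 127.0.0.1:" + port) // 33276 in this run
	}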
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-605514 -n running-upgrade-605514
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-605514 -n running-upgrade-605514: exit status 4 (374.226351ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 23:26:06.783614  507407 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-605514" does not appear in /home/jenkins/minikube-integration/17830-321683/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-605514" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
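Exit status 4 here is the "host up, kubeconfig stale" case: the container is Running, but the kubeconfig has no entry named "running-upgrade-605514" (the status output itself warns that kubectl points at a stale entry), so status cannot extract an API endpoint. A small illustrative check of the same lookup, assuming k8s.io/client-go is available (path and name are from this run, not a general recipe):

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	// hasContext reports whether a kubeconfig file defines the named context,
	// mirroring the lookup behind the "does not appear in kubeconfig" error.
	func hasContext(kubeconfig, name string) (bool, error) {
		cfg, err := clientcmd.LoadFromFile(kubeconfig)
		if err != nil {
			return false, err
		}
		_, ok := cfg.Contexts[name]
		return ok, nil
	}

	func main() {
		ok, err := hasContext("/home/jenkins/minikube-integration/17830-321683/kubeconfig",
			"running-upgrade-605514")
		fmt.Println(ok, err) // false <nil> in the failed run above
	}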
helpers_test.go:175: Cleaning up "running-upgrade-605514" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-605514
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-605514: (1.822199129s)
--- FAIL: TestRunningBinaryUpgrade (61.91s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (83.79s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.9.0.2635871394.exe start -p stopped-upgrade-874472 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.9.0.2635871394.exe start -p stopped-upgrade-874472 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m7.147648285s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.9.0.2635871394.exe -p stopped-upgrade-874472 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.9.0.2635871394.exe -p stopped-upgrade-874472 stop: (10.784999515s)
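When the current binary takes over this v1.18.0/cri-o profile, no preload tarball exists for that Kubernetes/runtime combination: the stderr trace below records a 404 for preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4, after which minikube falls back to its per-image cache (the long run of cache.go "save to tar file ... succeeded" lines). A sketch of that availability probe, with the URL taken verbatim from the log:

	package main

	import (
		"fmt"
		"net/http"
	)

	// preloadExists checks whether the preload bucket serves a tarball for a
	// given k8s/runtime pair; a miss means falling back to per-image caching.
	func preloadExists(url string) bool {
		resp, err := http.Head(url)
		if err != nil {
			return false
		}
		defer resp.Body.Close()
		return resp.StatusCode == http.StatusOK
	}

	func main() {
		url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4"
		if !preloadExists(url) {
			fmt.Println("no preload for v1.18.0/cri-o; caching images individually")
		}
	}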
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-874472 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-874472 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (5.856557452s)
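Exit status 90 is the same container-runtime error class that sank TestRunningBinaryUpgrade (minikube appears to reserve the 90s range for runtime failures such as RUNTIME_ENABLE), so both upgrade tests likely trip over the same cri-o config-path change. The harness reads the code from the wrapped *exec.ExitError; a minimal reproduction of that check (binary path as the harness uses it, adjust locally):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Same invocation the test makes against the upgraded profile.
		cmd := exec.Command("out/minikube-linux-amd64", "start",
			"-p", "stopped-upgrade-874472", "--memory=2200",
			"--alsologtostderr", "-v=1", "--driver=docker", "--container-runtime=crio")
		err := cmd.Run()

		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			fmt.Println("exit status:", exitErr.ExitCode()) // 90 in this run
		}
	}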

                                                
                                                
-- stdout --
	* [stopped-upgrade-874472] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17830-321683/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-321683/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-874472 in cluster stopped-upgrade-874472
	* Pulling base image v0.0.42-1704751654-17830 ...
	* Restarting existing docker container for "stopped-upgrade-874472" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 23:24:59.109341  493811 out.go:296] Setting OutFile to fd 1 ...
	I0108 23:24:59.109619  493811 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 23:24:59.109630  493811 out.go:309] Setting ErrFile to fd 2...
	I0108 23:24:59.109635  493811 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 23:24:59.109834  493811 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-321683/.minikube/bin
	I0108 23:24:59.110407  493811 out.go:303] Setting JSON to false
	I0108 23:24:59.111879  493811 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":14831,"bootTime":1704741468,"procs":482,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 23:24:59.111947  493811 start.go:138] virtualization: kvm guest
	I0108 23:24:59.114232  493811 out.go:177] * [stopped-upgrade-874472] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 23:24:59.116195  493811 out.go:177]   - MINIKUBE_LOCATION=17830
	I0108 23:24:59.116220  493811 notify.go:220] Checking for updates...
	I0108 23:24:59.117862  493811 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 23:24:59.119583  493811 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17830-321683/kubeconfig
	I0108 23:24:59.121133  493811 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-321683/.minikube
	I0108 23:24:59.122508  493811 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 23:24:59.124121  493811 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 23:24:59.126142  493811 config.go:182] Loaded profile config "stopped-upgrade-874472": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0108 23:24:59.126169  493811 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617
	I0108 23:24:59.128061  493811 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0108 23:24:59.129396  493811 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 23:24:59.151589  493811 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 23:24:59.151768  493811 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 23:24:59.205709  493811 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:true NGoroutines:65 SystemTime:2024-01-08 23:24:59.196610268 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 23:24:59.205874  493811 docker.go:295] overlay module found
	I0108 23:24:59.207881  493811 out.go:177] * Using the docker driver based on existing profile
	I0108 23:24:59.209247  493811 start.go:298] selected driver: docker
	I0108 23:24:59.209260  493811 start.go:902] validating driver "docker" against &{Name:stopped-upgrade-874472 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-874472 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I0108 23:24:59.209338  493811 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 23:24:59.210107  493811 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 23:24:59.261359  493811 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:true NGoroutines:65 SystemTime:2024-01-08 23:24:59.252352965 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 23:24:59.261854  493811 cni.go:84] Creating CNI manager for ""
	I0108 23:24:59.261888  493811 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I0108 23:24:59.261941  493811 start_flags.go:323] config:
	{Name:stopped-upgrade-874472 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-874472 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I0108 23:24:59.264468  493811 out.go:177] * Starting control plane node stopped-upgrade-874472 in cluster stopped-upgrade-874472
	I0108 23:24:59.265878  493811 cache.go:121] Beginning downloading kic base image for docker with crio
	I0108 23:24:59.267362  493811 out.go:177] * Pulling base image v0.0.42-1704751654-17830 ...
	I0108 23:24:59.268712  493811 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I0108 23:24:59.268804  493811 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 in local docker daemon
	I0108 23:24:59.285412  493811 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 in local docker daemon, skipping pull
	I0108 23:24:59.285439  493811 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 exists in daemon, skipping load
	W0108 23:24:59.288409  493811 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0108 23:24:59.288553  493811 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/stopped-upgrade-874472/config.json ...
	I0108 23:24:59.288641  493811 cache.go:107] acquiring lock: {Name:mk9457a2c5372718c08ae7f84ccfdc3732bf518a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:24:59.288641  493811 cache.go:107] acquiring lock: {Name:mk1ac519c6de83377d0fb7d22f8808738579fda5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:24:59.288689  493811 cache.go:107] acquiring lock: {Name:mk0d441ab511ff2aeee8fc071de52aadec82390d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:24:59.288750  493811 cache.go:115] /home/jenkins/minikube-integration/17830-321683/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 exists
	I0108 23:24:59.288745  493811 cache.go:107] acquiring lock: {Name:mk83523bf85d4732ee2f1368445f685aa5544e2b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:24:59.288761  493811 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/17830-321683/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0" took 137.781µs
	I0108 23:24:59.288774  493811 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/17830-321683/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 succeeded
	I0108 23:24:59.288749  493811 cache.go:115] /home/jenkins/minikube-integration/17830-321683/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0108 23:24:59.288764  493811 cache.go:107] acquiring lock: {Name:mk76ca98bb029a807f4f1a8f2da36558192aea2a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:24:59.288785  493811 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17830-321683/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 148.708µs
	I0108 23:24:59.288794  493811 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17830-321683/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0108 23:24:59.288801  493811 cache.go:115] /home/jenkins/minikube-integration/17830-321683/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 exists
	I0108 23:24:59.288802  493811 cache.go:115] /home/jenkins/minikube-integration/17830-321683/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 exists
	I0108 23:24:59.288809  493811 cache.go:194] Successfully downloaded all kic artifacts
	I0108 23:24:59.288815  493811 cache.go:115] /home/jenkins/minikube-integration/17830-321683/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I0108 23:24:59.288813  493811 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/17830-321683/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0" took 70.182µs
	I0108 23:24:59.288827  493811 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/17830-321683/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 succeeded
	I0108 23:24:59.288824  493811 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17830-321683/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 61.796µs
	I0108 23:24:59.288813  493811 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17830-321683/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2" took 133.009µs
	I0108 23:24:59.288836  493811 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17830-321683/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I0108 23:24:59.288674  493811 cache.go:107] acquiring lock: {Name:mk569c69694f4bb94b05696ecccc4f5144675d1c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:24:59.288843  493811 start.go:365] acquiring machines lock for stopped-upgrade-874472: {Name:mke31983a21c8bcd963952ee6732daf3df7a3087 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:24:59.288843  493811 cache.go:107] acquiring lock: {Name:mk6b6fccd491890753c37ec8790b8828f323ce0d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:24:59.288841  493811 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17830-321683/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 succeeded
	I0108 23:24:59.288866  493811 cache.go:115] /home/jenkins/minikube-integration/17830-321683/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 exists
	I0108 23:24:59.288874  493811 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/17830-321683/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0" took 225.32µs
	I0108 23:24:59.288881  493811 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/17830-321683/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 succeeded
	I0108 23:24:59.288873  493811 cache.go:107] acquiring lock: {Name:mk213ea7d03258ed33cf20504889993fc4dbcf5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:24:59.288941  493811 cache.go:115] /home/jenkins/minikube-integration/17830-321683/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 exists
	I0108 23:24:59.288951  493811 cache.go:96] cache image "registry.k8s.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/17830-321683/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7" took 110.892µs
	I0108 23:24:59.288957  493811 cache.go:115] /home/jenkins/minikube-integration/17830-321683/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 exists
	I0108 23:24:59.288963  493811 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/17830-321683/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 succeeded
	I0108 23:24:59.288968  493811 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/17830-321683/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0" took 118.479µs
	I0108 23:24:59.288974  493811 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/17830-321683/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 succeeded
	I0108 23:24:59.288975  493811 start.go:369] acquired machines lock for "stopped-upgrade-874472" in 112.895µs
	I0108 23:24:59.289018  493811 start.go:96] Skipping create...Using existing machine configuration
	I0108 23:24:59.289029  493811 fix.go:54] fixHost starting: m01
	I0108 23:24:59.288981  493811 cache.go:87] Successfully saved all images to host disk.
	I0108 23:24:59.289400  493811 cli_runner.go:164] Run: docker container inspect stopped-upgrade-874472 --format={{.State.Status}}
	I0108 23:24:59.307941  493811 fix.go:102] recreateIfNeeded on stopped-upgrade-874472: state=Stopped err=<nil>
	W0108 23:24:59.307995  493811 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 23:24:59.310353  493811 out.go:177] * Restarting existing docker container for "stopped-upgrade-874472" ...
	I0108 23:24:59.311772  493811 cli_runner.go:164] Run: docker start stopped-upgrade-874472
	I0108 23:24:59.585025  493811 cli_runner.go:164] Run: docker container inspect stopped-upgrade-874472 --format={{.State.Status}}
	I0108 23:24:59.603590  493811 kic.go:430] container "stopped-upgrade-874472" state is running.
	I0108 23:24:59.603968  493811 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-874472
	I0108 23:24:59.621356  493811 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/stopped-upgrade-874472/config.json ...
	I0108 23:24:59.621574  493811 machine.go:88] provisioning docker machine ...
	I0108 23:24:59.621596  493811 ubuntu.go:169] provisioning hostname "stopped-upgrade-874472"
	I0108 23:24:59.621645  493811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-874472
	I0108 23:24:59.639547  493811 main.go:141] libmachine: Using SSH client type: native
	I0108 23:24:59.639985  493811 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 33273 <nil> <nil>}
	I0108 23:24:59.640002  493811 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-874472 && echo "stopped-upgrade-874472" | sudo tee /etc/hostname
	I0108 23:24:59.640579  493811 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34062->127.0.0.1:33273: read: connection reset by peer
	I0108 23:25:02.761909  493811 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-874472
	
	I0108 23:25:02.762022  493811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-874472
	I0108 23:25:02.779726  493811 main.go:141] libmachine: Using SSH client type: native
	I0108 23:25:02.780121  493811 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 33273 <nil> <nil>}
	I0108 23:25:02.780152  493811 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-874472' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-874472/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-874472' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 23:25:02.891453  493811 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 23:25:02.891488  493811 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17830-321683/.minikube CaCertPath:/home/jenkins/minikube-integration/17830-321683/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17830-321683/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17830-321683/.minikube}
	I0108 23:25:02.891553  493811 ubuntu.go:177] setting up certificates
	I0108 23:25:02.891570  493811 provision.go:83] configureAuth start
	I0108 23:25:02.891641  493811 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-874472
	I0108 23:25:02.913859  493811 provision.go:138] copyHostCerts
	I0108 23:25:02.913954  493811 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-321683/.minikube/ca.pem, removing ...
	I0108 23:25:02.913981  493811 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-321683/.minikube/ca.pem
	I0108 23:25:02.914080  493811 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-321683/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17830-321683/.minikube/ca.pem (1082 bytes)
	I0108 23:25:02.914212  493811 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-321683/.minikube/cert.pem, removing ...
	I0108 23:25:02.914226  493811 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-321683/.minikube/cert.pem
	I0108 23:25:02.914262  493811 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-321683/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17830-321683/.minikube/cert.pem (1123 bytes)
	I0108 23:25:02.914341  493811 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-321683/.minikube/key.pem, removing ...
	I0108 23:25:02.914352  493811 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-321683/.minikube/key.pem
	I0108 23:25:02.914380  493811 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-321683/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17830-321683/.minikube/key.pem (1679 bytes)
	I0108 23:25:02.914451  493811 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17830-321683/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17830-321683/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17830-321683/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-874472 san=[172.17.0.2 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-874472]
	I0108 23:25:03.231036  493811 provision.go:172] copyRemoteCerts
	I0108 23:25:03.231103  493811 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 23:25:03.231154  493811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-874472
	I0108 23:25:03.253151  493811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33273 SSHKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/stopped-upgrade-874472/id_rsa Username:docker}
	I0108 23:25:03.338607  493811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-321683/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0108 23:25:03.360316  493811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-321683/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0108 23:25:03.385108  493811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-321683/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 23:25:03.405247  493811 provision.go:86] duration metric: configureAuth took 513.655209ms
	I0108 23:25:03.405288  493811 ubuntu.go:193] setting minikube options for container-runtime
	I0108 23:25:03.405559  493811 config.go:182] Loaded profile config "stopped-upgrade-874472": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0108 23:25:03.405701  493811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-874472
	I0108 23:25:03.424974  493811 main.go:141] libmachine: Using SSH client type: native
	I0108 23:25:03.425330  493811 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 33273 <nil> <nil>}
	I0108 23:25:03.425350  493811 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 23:25:04.027066  493811 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 23:25:04.027095  493811 machine.go:91] provisioned docker machine in 4.405505575s
	I0108 23:25:04.027111  493811 start.go:300] post-start starting for "stopped-upgrade-874472" (driver="docker")
	I0108 23:25:04.027121  493811 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 23:25:04.027171  493811 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 23:25:04.027210  493811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-874472
	I0108 23:25:04.045845  493811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33273 SSHKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/stopped-upgrade-874472/id_rsa Username:docker}
	I0108 23:25:04.131130  493811 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 23:25:04.134543  493811 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 23:25:04.134567  493811 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 23:25:04.134576  493811 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 23:25:04.134583  493811 info.go:137] Remote host: Ubuntu 19.10
	I0108 23:25:04.134594  493811 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-321683/.minikube/addons for local assets ...
	I0108 23:25:04.134655  493811 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-321683/.minikube/files for local assets ...
	I0108 23:25:04.134737  493811 filesync.go:149] local asset: /home/jenkins/minikube-integration/17830-321683/.minikube/files/etc/ssl/certs/3283842.pem -> 3283842.pem in /etc/ssl/certs
	I0108 23:25:04.134854  493811 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 23:25:04.142290  493811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-321683/.minikube/files/etc/ssl/certs/3283842.pem --> /etc/ssl/certs/3283842.pem (1708 bytes)
	I0108 23:25:04.162174  493811 start.go:303] post-start completed in 135.044628ms
	I0108 23:25:04.162261  493811 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 23:25:04.162304  493811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-874472
	I0108 23:25:04.182257  493811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33273 SSHKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/stopped-upgrade-874472/id_rsa Username:docker}
	I0108 23:25:04.268068  493811 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 23:25:04.272154  493811 fix.go:56] fixHost completed within 4.983114508s
	I0108 23:25:04.272184  493811 start.go:83] releasing machines lock for "stopped-upgrade-874472", held for 4.983188627s
	I0108 23:25:04.272262  493811 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-874472
	I0108 23:25:04.296064  493811 ssh_runner.go:195] Run: cat /version.json
	I0108 23:25:04.296121  493811 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 23:25:04.296126  493811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-874472
	I0108 23:25:04.296175  493811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-874472
	I0108 23:25:04.316968  493811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33273 SSHKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/stopped-upgrade-874472/id_rsa Username:docker}
	I0108 23:25:04.317137  493811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33273 SSHKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/stopped-upgrade-874472/id_rsa Username:docker}
	W0108 23:25:04.429247  493811 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0108 23:25:04.429340  493811 ssh_runner.go:195] Run: systemctl --version
	I0108 23:25:04.433907  493811 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 23:25:04.486251  493811 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 23:25:04.490482  493811 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 23:25:04.506705  493811 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0108 23:25:04.506808  493811 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 23:25:04.531558  493811 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0108 23:25:04.531592  493811 start.go:475] detecting cgroup driver to use...
	I0108 23:25:04.531630  493811 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0108 23:25:04.531704  493811 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 23:25:04.555578  493811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 23:25:04.565578  493811 docker.go:203] disabling cri-docker service (if available) ...
	I0108 23:25:04.565638  493811 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 23:25:04.575168  493811 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 23:25:04.585692  493811 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0108 23:25:04.595075  493811 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0108 23:25:04.595159  493811 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 23:25:04.692520  493811 docker.go:219] disabling docker service ...
	I0108 23:25:04.692603  493811 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 23:25:04.708123  493811 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 23:25:04.720103  493811 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 23:25:04.792781  493811 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 23:25:04.859488  493811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 23:25:04.869769  493811 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 23:25:04.885177  493811 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0108 23:25:04.885245  493811 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 23:25:04.896487  493811 out.go:177] 
	W0108 23:25:04.898213  493811 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0108 23:25:04.898237  493811 out.go:239] * 
	* 
	W0108 23:25:04.899297  493811 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 23:25:04.900817  493811 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:213: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-874472 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (83.79s)
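The root cause is visible in the stderr above: the HEAD binary updates cri-o's pause image by running sed against /etc/crio/crio.conf.d/02-crio.conf, but the container restored from the v1.9.0 era (Ubuntu 19.10, per the os-release probe in the log) has no such drop-in file, so sed exits with status 2 and start aborts with RUNTIME_ENABLE. A defensive version of that step could fall back to the legacy config path; roughly, as in the sketch below (shell only, not minikube's actual code; the paths and the pause:3.2 tag are taken from the log):

	# Sketch: update pause_image in whichever cri-o config exists,
	# creating the drop-in (and its directory) if neither is present.
	PAUSE='pause_image = "registry.k8s.io/pause:3.2"'
	if [ -f /etc/crio/crio.conf.d/02-crio.conf ]; then
		sudo sed -i "s|^.*pause_image = .*$|${PAUSE}|" /etc/crio/crio.conf.d/02-crio.conf
	elif [ -f /etc/crio/crio.conf ]; then
		sudo sed -i "s|^.*pause_image = .*$|${PAUSE}|" /etc/crio/crio.conf
	else
		sudo mkdir -p /etc/crio/crio.conf.d
		printf '[crio.image]\n%s\n' "${PAUSE}" | sudo tee /etc/crio/crio.conf.d/02-crio.conf >/dev/null
	fi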

                                                
                                    

Test pass (283/316)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 7.77
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.09
10 TestDownloadOnly/v1.28.4/json-events 4.94
11 TestDownloadOnly/v1.28.4/preload-exists 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.08
17 TestDownloadOnly/v1.29.0-rc.2/json-events 7.7
18 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
22 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.08
23 TestDownloadOnly/DeleteAll 0.22
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.14
25 TestDownloadOnlyKic 1.3
26 TestBinaryMirror 0.75
27 TestOffline 87.64
30 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
31 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
32 TestAddons/Setup 134.92
34 TestAddons/parallel/Registry 15.55
36 TestAddons/parallel/InspektorGadget 11.82
37 TestAddons/parallel/MetricsServer 5.67
38 TestAddons/parallel/HelmTiller 11.13
40 TestAddons/parallel/CSI 95.58
41 TestAddons/parallel/Headlamp 12.26
42 TestAddons/parallel/CloudSpanner 5.51
43 TestAddons/parallel/LocalPath 56.08
44 TestAddons/parallel/NvidiaDevicePlugin 5.47
45 TestAddons/parallel/Yakd 6.07
48 TestAddons/serial/GCPAuth/Namespaces 0.12
49 TestAddons/StoppedEnableDisable 12.21
50 TestCertOptions 27.79
51 TestCertExpiration 224.92
53 TestForceSystemdFlag 28.21
54 TestForceSystemdEnv 43.75
56 TestKVMDriverInstallOrUpdate 1.49
60 TestErrorSpam/setup 24.66
61 TestErrorSpam/start 0.65
62 TestErrorSpam/status 0.92
63 TestErrorSpam/pause 1.55
64 TestErrorSpam/unpause 1.61
65 TestErrorSpam/stop 1.42
68 TestFunctional/serial/CopySyncFile 0
69 TestFunctional/serial/StartWithProxy 70.07
70 TestFunctional/serial/AuditLog 0
71 TestFunctional/serial/SoftStart 36.29
72 TestFunctional/serial/KubeContext 0.05
73 TestFunctional/serial/KubectlGetPods 0.06
76 TestFunctional/serial/CacheCmd/cache/add_remote 2.77
77 TestFunctional/serial/CacheCmd/cache/add_local 4.88
78 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
79 TestFunctional/serial/CacheCmd/cache/list 0.07
80 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
81 TestFunctional/serial/CacheCmd/cache/cache_reload 1.71
82 TestFunctional/serial/CacheCmd/cache/delete 0.13
83 TestFunctional/serial/MinikubeKubectlCmd 0.13
84 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
85 TestFunctional/serial/ExtraConfig 29.73
86 TestFunctional/serial/ComponentHealth 0.07
87 TestFunctional/serial/LogsCmd 1.38
88 TestFunctional/serial/LogsFileCmd 1.4
89 TestFunctional/serial/InvalidService 3.88
91 TestFunctional/parallel/ConfigCmd 0.44
92 TestFunctional/parallel/DashboardCmd 10.08
93 TestFunctional/parallel/DryRun 0.48
94 TestFunctional/parallel/InternationalLanguage 0.22
95 TestFunctional/parallel/StatusCmd 1
99 TestFunctional/parallel/ServiceCmdConnect 10.71
100 TestFunctional/parallel/AddonsCmd 0.16
101 TestFunctional/parallel/PersistentVolumeClaim 34.82
103 TestFunctional/parallel/SSHCmd 0.9
104 TestFunctional/parallel/CpCmd 2.59
105 TestFunctional/parallel/MySQL 22.8
106 TestFunctional/parallel/FileSync 0.33
107 TestFunctional/parallel/CertSync 2
111 TestFunctional/parallel/NodeLabels 0.08
113 TestFunctional/parallel/NonActiveRuntimeDisabled 0.62
115 TestFunctional/parallel/License 0.26
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.29
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.31
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.31
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.31
120 TestFunctional/parallel/ImageCommands/ImageBuild 3.45
121 TestFunctional/parallel/ImageCommands/Setup 1.07
122 TestFunctional/parallel/UpdateContextCmd/no_changes 0.18
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.23
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.25
127 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.83
128 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
130 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 14.37
131 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 4.88
132 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.71
133 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
134 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
138 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
139 TestFunctional/parallel/Version/short 0.07
140 TestFunctional/parallel/Version/components 0.53
141 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.78
142 TestFunctional/parallel/ImageCommands/ImageRemove 0.52
143 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.39
144 TestFunctional/parallel/ServiceCmd/DeployApp 10.2
145 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.48
146 TestFunctional/parallel/ProfileCmd/profile_not_create 0.4
147 TestFunctional/parallel/ProfileCmd/profile_list 0.37
148 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
149 TestFunctional/parallel/MountCmd/any-port 5.78
150 TestFunctional/parallel/MountCmd/specific-port 2.19
151 TestFunctional/parallel/ServiceCmd/List 0.98
152 TestFunctional/parallel/ServiceCmd/JSONOutput 1.84
153 TestFunctional/parallel/MountCmd/VerifyCleanup 2.25
154 TestFunctional/parallel/ServiceCmd/HTTPS 0.59
155 TestFunctional/parallel/ServiceCmd/Format 0.69
156 TestFunctional/parallel/ServiceCmd/URL 0.68
157 TestFunctional/delete_addon-resizer_images 0.07
158 TestFunctional/delete_my-image_image 0.02
159 TestFunctional/delete_minikube_cached_images 0.02
163 TestIngressAddonLegacy/StartLegacyK8sCluster 65.74
165 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 9.37
166 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.6
170 TestJSONOutput/start/Command 69.27
171 TestJSONOutput/start/Audit 0
173 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
174 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
176 TestJSONOutput/pause/Command 0.68
177 TestJSONOutput/pause/Audit 0
179 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
180 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
182 TestJSONOutput/unpause/Command 0.61
183 TestJSONOutput/unpause/Audit 0
185 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/stop/Command 5.79
189 TestJSONOutput/stop/Audit 0
191 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
193 TestErrorJSONOutput 0.24
195 TestKicCustomNetwork/create_custom_network 31.82
196 TestKicCustomNetwork/use_default_bridge_network 25.58
197 TestKicExistingNetwork 24.94
198 TestKicCustomSubnet 28.26
199 TestKicStaticIP 24.72
200 TestMainNoArgs 0.06
201 TestMinikubeProfile 54.62
204 TestMountStart/serial/StartWithMountFirst 8.21
205 TestMountStart/serial/VerifyMountFirst 0.28
206 TestMountStart/serial/StartWithMountSecond 8.13
207 TestMountStart/serial/VerifyMountSecond 0.27
208 TestMountStart/serial/DeleteFirst 1.64
209 TestMountStart/serial/VerifyMountPostDelete 0.27
210 TestMountStart/serial/Stop 1.23
211 TestMountStart/serial/RestartStopped 7.02
212 TestMountStart/serial/VerifyMountPostStop 0.27
215 TestMultiNode/serial/FreshStart2Nodes 118.29
216 TestMultiNode/serial/DeployApp2Nodes 3.74
218 TestMultiNode/serial/AddNode 19.52
219 TestMultiNode/serial/MultiNodeLabels 0.06
220 TestMultiNode/serial/ProfileList 0.29
221 TestMultiNode/serial/CopyFile 9.81
222 TestMultiNode/serial/StopNode 2.16
223 TestMultiNode/serial/StartAfterStop 10.79
224 TestMultiNode/serial/RestartKeepsNodes 117.12
225 TestMultiNode/serial/DeleteNode 4.74
226 TestMultiNode/serial/StopMultiNode 23.86
227 TestMultiNode/serial/RestartMultiNode 79.05
228 TestMultiNode/serial/ValidateNameConflict 26.24
233 TestPreload 150.28
235 TestScheduledStopUnix 97.53
238 TestInsufficientStorage 13.23
241 TestKubernetesUpgrade 343.89
242 TestMissingContainerUpgrade 145.39
244 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
245 TestNoKubernetes/serial/StartWithK8s 40.2
246 TestNoKubernetes/serial/StartWithStopK8s 9.86
247 TestNoKubernetes/serial/Start 6.06
248 TestNoKubernetes/serial/VerifyK8sNotRunning 0.32
249 TestNoKubernetes/serial/ProfileList 1.46
250 TestNoKubernetes/serial/Stop 1.26
251 TestNoKubernetes/serial/StartNoArgs 7.46
252 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.32
253 TestStoppedBinaryUpgrade/Setup 0.34
255 TestStoppedBinaryUpgrade/MinikubeLogs 0.54
257 TestPause/serial/Start 49.62
258 TestPause/serial/SecondStartNoReconfiguration 38.26
266 TestPause/serial/Pause 0.78
267 TestPause/serial/VerifyStatus 0.35
271 TestPause/serial/Unpause 0.76
272 TestPause/serial/PauseAgain 0.88
273 TestPause/serial/DeletePaused 2.91
278 TestNetworkPlugins/group/false 6.53
279 TestPause/serial/VerifyDeletedResources 0.53
281 TestStartStop/group/old-k8s-version/serial/FirstStart 131.48
283 TestStartStop/group/no-preload/serial/FirstStart 70.39
288 TestStartStop/group/embed-certs/serial/FirstStart 72.6
289 TestStartStop/group/no-preload/serial/DeployApp 8.26
290 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.92
291 TestStartStop/group/no-preload/serial/Stop 11.97
292 TestStartStop/group/embed-certs/serial/DeployApp 8.28
293 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.95
294 TestStartStop/group/embed-certs/serial/Stop 12.01
295 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.26
296 TestStartStop/group/no-preload/serial/SecondStart 337.99
297 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
298 TestStartStop/group/embed-certs/serial/SecondStart 341.98
299 TestStartStop/group/old-k8s-version/serial/DeployApp 9.39
300 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.86
301 TestStartStop/group/old-k8s-version/serial/Stop 12.02
303 TestStartStop/group/newest-cni/serial/FirstStart 34.44
304 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
305 TestStartStop/group/old-k8s-version/serial/SecondStart 427.11
306 TestStartStop/group/newest-cni/serial/DeployApp 0
307 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.87
308 TestStartStop/group/newest-cni/serial/Stop 3.65
309 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.28
310 TestStartStop/group/newest-cni/serial/SecondStart 26.32
311 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
312 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
313 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
314 TestStartStop/group/newest-cni/serial/Pause 2.78
316 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 37.79
317 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.27
318 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.93
319 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.97
320 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
321 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 345.33
322 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 11.01
323 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
324 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 11.01
325 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
326 TestStartStop/group/no-preload/serial/Pause 3.09
327 TestNetworkPlugins/group/auto/Start 70.11
328 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
329 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
330 TestStartStop/group/embed-certs/serial/Pause 2.89
331 TestNetworkPlugins/group/kindnet/Start 69.95
332 TestNetworkPlugins/group/auto/KubeletFlags 0.33
333 TestNetworkPlugins/group/auto/NetCatPod 9.21
334 TestNetworkPlugins/group/auto/DNS 0.15
335 TestNetworkPlugins/group/auto/Localhost 0.13
336 TestNetworkPlugins/group/auto/HairPin 0.14
337 TestNetworkPlugins/group/kindnet/ControllerPod 6.03
338 TestNetworkPlugins/group/kindnet/KubeletFlags 0.3
339 TestNetworkPlugins/group/kindnet/NetCatPod 9.27
340 TestNetworkPlugins/group/kindnet/DNS 0.16
341 TestNetworkPlugins/group/kindnet/Localhost 0.15
342 TestNetworkPlugins/group/kindnet/HairPin 0.14
343 TestNetworkPlugins/group/calico/Start 62.26
344 TestNetworkPlugins/group/custom-flannel/Start 54.56
345 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
346 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.12
347 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.35
348 TestStartStop/group/old-k8s-version/serial/Pause 3.57
349 TestNetworkPlugins/group/enable-default-cni/Start 82.18
350 TestNetworkPlugins/group/calico/ControllerPod 6.01
351 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
352 TestNetworkPlugins/group/calico/KubeletFlags 0.31
353 TestNetworkPlugins/group/calico/NetCatPod 10.2
354 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
355 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
356 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.19
357 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.27
358 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.08
359 TestNetworkPlugins/group/calico/DNS 0.18
360 TestNetworkPlugins/group/calico/Localhost 0.16
361 TestNetworkPlugins/group/calico/HairPin 0.15
362 TestNetworkPlugins/group/flannel/Start 54.27
363 TestNetworkPlugins/group/custom-flannel/DNS 0.26
364 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
365 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
366 TestNetworkPlugins/group/bridge/Start 79.89
367 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
368 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.19
369 TestNetworkPlugins/group/flannel/ControllerPod 6.01
370 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
371 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
372 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
373 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
374 TestNetworkPlugins/group/flannel/NetCatPod 9.19
375 TestNetworkPlugins/group/flannel/DNS 0.16
376 TestNetworkPlugins/group/flannel/Localhost 0.14
377 TestNetworkPlugins/group/flannel/HairPin 0.14
378 TestNetworkPlugins/group/bridge/KubeletFlags 0.28
379 TestNetworkPlugins/group/bridge/NetCatPod 10.18
380 TestNetworkPlugins/group/bridge/DNS 0.15
381 TestNetworkPlugins/group/bridge/Localhost 0.13
382 TestNetworkPlugins/group/bridge/HairPin 0.12
x
+
TestDownloadOnly/v1.16.0/json-events (7.77s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-926847 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-926847 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (7.770919747s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (7.77s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-926847
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-926847: exit status 85 (84.708377ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-926847 | jenkins | v1.32.0 | 08 Jan 24 22:51 UTC |          |
	|         | -p download-only-926847        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 22:51:39
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 22:51:39.384589  328396 out.go:296] Setting OutFile to fd 1 ...
	I0108 22:51:39.384903  328396 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:51:39.384914  328396 out.go:309] Setting ErrFile to fd 2...
	I0108 22:51:39.384919  328396 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:51:39.385177  328396 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-321683/.minikube/bin
	W0108 22:51:39.385344  328396 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17830-321683/.minikube/config/config.json: open /home/jenkins/minikube-integration/17830-321683/.minikube/config/config.json: no such file or directory
	I0108 22:51:39.386193  328396 out.go:303] Setting JSON to true
	I0108 22:51:39.387326  328396 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":12831,"bootTime":1704741468,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 22:51:39.387428  328396 start.go:138] virtualization: kvm guest
	I0108 22:51:39.390395  328396 out.go:97] [download-only-926847] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	W0108 22:51:39.390547  328396 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17830-321683/.minikube/cache/preloaded-tarball: no such file or directory
	I0108 22:51:39.390657  328396 notify.go:220] Checking for updates...
	I0108 22:51:39.392144  328396 out.go:169] MINIKUBE_LOCATION=17830
	I0108 22:51:39.395633  328396 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 22:51:39.397396  328396 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17830-321683/kubeconfig
	I0108 22:51:39.399066  328396 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-321683/.minikube
	I0108 22:51:39.400710  328396 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0108 22:51:39.403556  328396 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0108 22:51:39.403922  328396 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 22:51:39.429053  328396 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 22:51:39.429244  328396 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 22:51:39.481214  328396 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2024-01-08 22:51:39.472496887 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 22:51:39.481357  328396 docker.go:295] overlay module found
	I0108 22:51:39.483422  328396 out.go:97] Using the docker driver based on user configuration
	I0108 22:51:39.483460  328396 start.go:298] selected driver: docker
	I0108 22:51:39.483468  328396 start.go:902] validating driver "docker" against <nil>
	I0108 22:51:39.483700  328396 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 22:51:39.538651  328396 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2024-01-08 22:51:39.53000155 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 22:51:39.538811  328396 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0108 22:51:39.539417  328396 start_flags.go:394] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0108 22:51:39.539594  328396 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0108 22:51:39.541945  328396 out.go:169] Using Docker driver with root privileges
	I0108 22:51:39.543597  328396 cni.go:84] Creating CNI manager for ""
	I0108 22:51:39.543628  328396 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0108 22:51:39.543647  328396 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I0108 22:51:39.543660  328396 start_flags.go:323] config:
	{Name:download-only-926847 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-926847 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 22:51:39.545458  328396 out.go:97] Starting control plane node download-only-926847 in cluster download-only-926847
	I0108 22:51:39.545490  328396 cache.go:121] Beginning downloading kic base image for docker with crio
	I0108 22:51:39.547040  328396 out.go:97] Pulling base image v0.0.42-1704751654-17830 ...
	I0108 22:51:39.547078  328396 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0108 22:51:39.547196  328396 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 in local docker daemon
	I0108 22:51:39.563443  328396 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 to local cache
	I0108 22:51:39.563640  328396 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 in local cache directory
	I0108 22:51:39.563749  328396 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 to local cache
	I0108 22:51:39.566869  328396 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0108 22:51:39.566892  328396 cache.go:56] Caching tarball of preloaded images
	I0108 22:51:39.567025  328396 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0108 22:51:39.569525  328396 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0108 22:51:39.569580  328396 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0108 22:51:39.591884  328396 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/17830-321683/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0108 22:51:42.629624  328396 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 as a tarball
	I0108 22:51:43.524195  328396 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0108 22:51:43.524302  328396 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17830-321683/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-926847"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.09s)
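The exit status 85 above is expected rather than a regression: a --download-only run never creates a node, so "minikube logs" can only print the Audit and Last Start sections before failing with the "control plane node does not exist" message, which this test tolerates. The download itself is integrity-checked against the md5 carried in the ?checksum= query parameter of the preload URL; reproduced by hand, the check would look roughly like the following sketch (URL and checksum copied from the log above):

	# Sketch: fetch the v1.16.0 cri-o preload and verify the md5 that
	# minikube passes as the ?checksum= parameter.
	URL=https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	curl -fsSL -o preload.tar.lz4 "$URL"
	echo "432b600409d778ea7a21214e83948570  preload.tar.lz4" | md5sum -c -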

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/json-events (4.94s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-926847 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-926847 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.934825123s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (4.94s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-926847
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-926847: exit status 85 (81.602719ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-926847 | jenkins | v1.32.0 | 08 Jan 24 22:51 UTC |          |
	|         | -p download-only-926847        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-926847 | jenkins | v1.32.0 | 08 Jan 24 22:51 UTC |          |
	|         | -p download-only-926847        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 22:51:47
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 22:51:47.241340  328528 out.go:296] Setting OutFile to fd 1 ...
	I0108 22:51:47.241646  328528 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:51:47.241657  328528 out.go:309] Setting ErrFile to fd 2...
	I0108 22:51:47.241662  328528 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:51:47.241877  328528 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-321683/.minikube/bin
	W0108 22:51:47.241999  328528 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17830-321683/.minikube/config/config.json: open /home/jenkins/minikube-integration/17830-321683/.minikube/config/config.json: no such file or directory
	I0108 22:51:47.242434  328528 out.go:303] Setting JSON to true
	I0108 22:51:47.243385  328528 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":12839,"bootTime":1704741468,"procs":168,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 22:51:47.243455  328528 start.go:138] virtualization: kvm guest
	I0108 22:51:47.245752  328528 out.go:97] [download-only-926847] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 22:51:47.245955  328528 notify.go:220] Checking for updates...
	I0108 22:51:47.247499  328528 out.go:169] MINIKUBE_LOCATION=17830
	I0108 22:51:47.250684  328528 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 22:51:47.252236  328528 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17830-321683/kubeconfig
	I0108 22:51:47.253671  328528 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-321683/.minikube
	I0108 22:51:47.255523  328528 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-926847"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/json-events (7.7s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-926847 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-926847 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (7.696311961s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (7.70s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-926847
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-926847: exit status 85 (80.022186ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-926847 | jenkins | v1.32.0 | 08 Jan 24 22:51 UTC |          |
	|         | -p download-only-926847           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-926847 | jenkins | v1.32.0 | 08 Jan 24 22:51 UTC |          |
	|         | -p download-only-926847           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-926847 | jenkins | v1.32.0 | 08 Jan 24 22:51 UTC |          |
	|         | -p download-only-926847           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 22:51:52
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 22:51:52.256622  328662 out.go:296] Setting OutFile to fd 1 ...
	I0108 22:51:52.256786  328662 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:51:52.256797  328662 out.go:309] Setting ErrFile to fd 2...
	I0108 22:51:52.256805  328662 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:51:52.257037  328662 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-321683/.minikube/bin
	W0108 22:51:52.257182  328662 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17830-321683/.minikube/config/config.json: open /home/jenkins/minikube-integration/17830-321683/.minikube/config/config.json: no such file or directory
	I0108 22:51:52.257645  328662 out.go:303] Setting JSON to true
	I0108 22:51:52.258652  328662 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":12844,"bootTime":1704741468,"procs":168,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 22:51:52.258717  328662 start.go:138] virtualization: kvm guest
	I0108 22:51:52.261134  328662 out.go:97] [download-only-926847] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 22:51:52.263036  328662 out.go:169] MINIKUBE_LOCATION=17830
	I0108 22:51:52.261376  328662 notify.go:220] Checking for updates...
	I0108 22:51:52.266356  328662 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 22:51:52.267964  328662 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17830-321683/kubeconfig
	I0108 22:51:52.269452  328662 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-321683/.minikube
	I0108 22:51:52.270899  328662 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0108 22:51:52.273788  328662 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0108 22:51:52.274303  328662 config.go:182] Loaded profile config "download-only-926847": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	W0108 22:51:52.274352  328662 start.go:810] api.Load failed for download-only-926847: filestore "download-only-926847": Docker machine "download-only-926847" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0108 22:51:52.274427  328662 driver.go:392] Setting default libvirt URI to qemu:///system
	W0108 22:51:52.274467  328662 start.go:810] api.Load failed for download-only-926847: filestore "download-only-926847": Docker machine "download-only-926847" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0108 22:51:52.296372  328662 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 22:51:52.296496  328662 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 22:51:52.349936  328662 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:38 SystemTime:2024-01-08 22:51:52.340574651 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 22:51:52.350077  328662 docker.go:295] overlay module found
	I0108 22:51:52.352326  328662 out.go:97] Using the docker driver based on existing profile
	I0108 22:51:52.352359  328662 start.go:298] selected driver: docker
	I0108 22:51:52.352367  328662 start.go:902] validating driver "docker" against &{Name:download-only-926847 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-926847 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 22:51:52.352566  328662 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 22:51:52.404908  328662 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:38 SystemTime:2024-01-08 22:51:52.395881788 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 22:51:52.405577  328662 cni.go:84] Creating CNI manager for ""
	I0108 22:51:52.405597  328662 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0108 22:51:52.405630  328662 start_flags.go:323] config:
	{Name:download-only-926847 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-926847 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 22:51:52.407767  328662 out.go:97] Starting control plane node download-only-926847 in cluster download-only-926847
	I0108 22:51:52.407797  328662 cache.go:121] Beginning downloading kic base image for docker with crio
	I0108 22:51:52.409308  328662 out.go:97] Pulling base image v0.0.42-1704751654-17830 ...
	I0108 22:51:52.409334  328662 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0108 22:51:52.409393  328662 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 in local docker daemon
	I0108 22:51:52.425118  328662 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 to local cache
	I0108 22:51:52.425240  328662 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 in local cache directory
	I0108 22:51:52.425259  328662 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 in local cache directory, skipping pull
	I0108 22:51:52.425263  328662 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 exists in cache, skipping pull
	I0108 22:51:52.425275  328662 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 as a tarball
	I0108 22:51:52.432146  328662 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0108 22:51:52.432171  328662 cache.go:56] Caching tarball of preloaded images
	I0108 22:51:52.432295  328662 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0108 22:51:52.434407  328662 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0108 22:51:52.434435  328662 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0108 22:51:52.460763  328662 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:2e182f4d7475b49e22eaf15ea22c281b -> /home/jenkins/minikube-integration/17830-321683/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0108 22:51:55.137517  328662 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0108 22:51:55.137629  328662 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17830-321683/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0108 22:51:55.943038  328662 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I0108 22:51:55.943213  328662 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/download-only-926847/config.json ...
	I0108 22:51:55.943503  328662 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0108 22:51:55.943727  328662 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17830-321683/.minikube/cache/linux/amd64/v1.29.0-rc.2/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-926847"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

x
+
TestDownloadOnly/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.22s)

x
+
TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-926847
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

x
+
TestDownloadOnlyKic (1.3s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:225: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-817040 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-817040" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-817040
--- PASS: TestDownloadOnlyKic (1.30s)

x
+
TestBinaryMirror (0.75s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-114815 --alsologtostderr --binary-mirror http://127.0.0.1:33677 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-114815" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-114815
--- PASS: TestBinaryMirror (0.75s)

x
+
TestOffline (87.64s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-714233 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-714233 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (1m23.071268945s)
helpers_test.go:175: Cleaning up "offline-crio-714233" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-714233
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-714233: (4.572124695s)
--- PASS: TestOffline (87.64s)

x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-608450
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-608450: exit status 85 (75.144893ms)

-- stdout --
	* Profile "addons-608450" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-608450"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-608450
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-608450: exit status 85 (76.070261ms)

-- stdout --
	* Profile "addons-608450" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-608450"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

x
+
TestAddons/Setup (134.92s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-608450 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-608450 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m14.924404392s)
--- PASS: TestAddons/Setup (134.92s)

x
+
TestAddons/parallel/Registry (15.55s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 13.488782ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-rkzl5" [1fcdf9f1-94c3-44b6-90c7-f65016ba020b] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005538476s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-hbq4v" [df8417ba-d80c-4116-a735-7d5e7a4728b8] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.006456515s
addons_test.go:340: (dbg) Run:  kubectl --context addons-608450 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-608450 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-608450 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.532511252s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-608450 ip
2024/01/08 22:54:32 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-608450 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.55s)

x
+
TestAddons/parallel/InspektorGadget (11.82s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-rw9hw" [1a0440de-4f78-4b3c-9348-8c02606f0366] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003732855s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-608450
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-608450: (5.818084835s)
--- PASS: TestAddons/parallel/InspektorGadget (11.82s)

x
+
TestAddons/parallel/MetricsServer (5.67s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 3.417815ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-s42rt" [a1314306-1372-43ec-bc21-115b88b40633] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004870241s
addons_test.go:415: (dbg) Run:  kubectl --context addons-608450 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-608450 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.67s)

x
+
TestAddons/parallel/HelmTiller (11.13s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 14.142446ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-jkj4j" [4e5dde9a-5f31-41be-b435-cbe1564e4068] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.005643435s
addons_test.go:473: (dbg) Run:  kubectl --context addons-608450 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-608450 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.570324719s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-608450 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.13s)

x
+
TestAddons/parallel/CSI (95.58s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 15.046533ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-608450 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-608450 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [775ec6a9-62cd-4992-936d-89459f8a7a7b] Pending
helpers_test.go:344: "task-pv-pod" [775ec6a9-62cd-4992-936d-89459f8a7a7b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [775ec6a9-62cd-4992-936d-89459f8a7a7b] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.004322745s
addons_test.go:584: (dbg) Run:  kubectl --context addons-608450 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-608450 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-608450 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-608450 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-608450 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-608450 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-608450 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [da49b2ce-3dbc-4e7b-b6d3-bc6506ee6136] Pending
helpers_test.go:344: "task-pv-pod-restore" [da49b2ce-3dbc-4e7b-b6d3-bc6506ee6136] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [da49b2ce-3dbc-4e7b-b6d3-bc6506ee6136] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003249511s
addons_test.go:626: (dbg) Run:  kubectl --context addons-608450 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-608450 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-608450 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-608450 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-608450 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.59009732s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-608450 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (95.58s)

x
+
TestAddons/parallel/Headlamp (12.26s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-608450 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-608450 --alsologtostderr -v=1: (1.253943316s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-b8l48" [7ad6a11e-ee50-4f0c-a83a-4694ed6c1cd2] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-b8l48" [7ad6a11e-ee50-4f0c-a83a-4694ed6c1cd2] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003551021s
--- PASS: TestAddons/parallel/Headlamp (12.26s)

x
+
TestAddons/parallel/CloudSpanner (5.51s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-64c8c85f65-qwwhk" [c37909f0-d784-4e7e-bd53-017f41c73f99] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004054223s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-608450
--- PASS: TestAddons/parallel/CloudSpanner (5.51s)

x
+
TestAddons/parallel/LocalPath (56.08s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-608450 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-608450 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608450 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [fb4d4602-8fbf-4296-822f-12de684c28ff] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [fb4d4602-8fbf-4296-822f-12de684c28ff] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [fb4d4602-8fbf-4296-822f-12de684c28ff] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.004389295s
addons_test.go:891: (dbg) Run:  kubectl --context addons-608450 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-608450 ssh "cat /opt/local-path-provisioner/pvc-3ba3a57c-5f41-4761-a694-297c1dadc482_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-608450 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-608450 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-608450 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-amd64 -p addons-608450 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.225586108s)
--- PASS: TestAddons/parallel/LocalPath (56.08s)

x
+
TestAddons/parallel/NvidiaDevicePlugin (5.47s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-9fbl6" [c5f6cd8f-ab46-4842-a3a0-32b3d1ad0604] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004607821s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-608450
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.47s)

x
+
TestAddons/parallel/Yakd (6.07s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-wqlww" [578e3e8c-b1bd-48c4-8d52-5eb89a6db258] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.067594367s
--- PASS: TestAddons/parallel/Yakd (6.07s)

x
+
TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-608450 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-608450 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

x
+
TestAddons/StoppedEnableDisable (12.21s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-608450
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-608450: (11.921256138s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-608450
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-608450
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-608450
--- PASS: TestAddons/StoppedEnableDisable (12.21s)

x
+
TestCertOptions (27.79s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-550515 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-550515 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (25.026300531s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-550515 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-550515 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-550515 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-550515" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-550515
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-550515: (2.049450975s)
--- PASS: TestCertOptions (27.79s)
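The SAN and port assertions above can be reproduced by hand. A minimal sketch, assuming a released minikube binary stands in for the CI build under test and "cert-options" is a throwaway profile name:

	minikube start -p cert-options --apiserver-ips=192.168.15.15 --apiserver-names=www.google.com --apiserver-port=8555
	minikube -p cert-options ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 'Subject Alternative Name'

The extra IPs and names passed at start time should show up in the certificate's Subject Alternative Name block.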

TestCertExpiration (224.92s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-804190 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-804190 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (27.048617594s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-804190 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-804190 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (15.433947056s)
helpers_test.go:175: Cleaning up "cert-expiration-804190" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-804190
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-804190: (2.43180744s)
--- PASS: TestCertExpiration (224.92s)
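The two starts exercise certificate renewal: the first provisions certs that expire in 3 minutes, the second regenerates them with an 8760h lifetime. The ~180s not accounted for by the two starts and the delete matches the 3m window the test appears to wait out. A sketch of the same flow, assuming a released minikube binary:

	minikube start -p cert-expiration --cert-expiration=3m
	# wait past the 3m window, then restart with a longer expiry to force renewal
	minikube start -p cert-expiration --cert-expiration=8760h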

TestForceSystemdFlag (28.21s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-261344 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-261344 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (25.480137011s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-261344 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-261344" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-261344
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-261344: (2.431404202s)
--- PASS: TestForceSystemdFlag (28.21s)
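The `cat /etc/crio/crio.conf.d/02-crio.conf` step is presumably how the test confirms that --force-systemd switched CRI-O's cgroup manager. A rough by-hand equivalent, assuming a released minikube binary (grepping for cgroup_manager, the standard CRI-O setting, is my assumption, not something this log confirms):

	minikube start -p force-systemd --force-systemd --container-runtime=crio
	minikube -p force-systemd ssh "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager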

TestForceSystemdEnv (43.75s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-732650 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-732650 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (39.715788423s)
helpers_test.go:175: Cleaning up "force-systemd-env-732650" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-732650
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-732650: (4.036923438s)
--- PASS: TestForceSystemdEnv (43.75s)

TestKVMDriverInstallOrUpdate (1.49s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.49s)

TestErrorSpam/setup (24.66s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-549398 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-549398 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-549398 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-549398 --driver=docker  --container-runtime=crio: (24.660028756s)
--- PASS: TestErrorSpam/setup (24.66s)

TestErrorSpam/start (0.65s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-549398 --log_dir /tmp/nospam-549398 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-549398 --log_dir /tmp/nospam-549398 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-549398 --log_dir /tmp/nospam-549398 start --dry-run
--- PASS: TestErrorSpam/start (0.65s)

TestErrorSpam/status (0.92s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-549398 --log_dir /tmp/nospam-549398 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-549398 --log_dir /tmp/nospam-549398 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-549398 --log_dir /tmp/nospam-549398 status
--- PASS: TestErrorSpam/status (0.92s)

TestErrorSpam/pause (1.55s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-549398 --log_dir /tmp/nospam-549398 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-549398 --log_dir /tmp/nospam-549398 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-549398 --log_dir /tmp/nospam-549398 pause
--- PASS: TestErrorSpam/pause (1.55s)

TestErrorSpam/unpause (1.61s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-549398 --log_dir /tmp/nospam-549398 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-549398 --log_dir /tmp/nospam-549398 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-549398 --log_dir /tmp/nospam-549398 unpause
--- PASS: TestErrorSpam/unpause (1.61s)

TestErrorSpam/stop (1.42s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-549398 --log_dir /tmp/nospam-549398 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-549398 --log_dir /tmp/nospam-549398 stop: (1.206414221s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-549398 --log_dir /tmp/nospam-549398 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-549398 --log_dir /tmp/nospam-549398 stop
--- PASS: TestErrorSpam/stop (1.42s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: /home/jenkins/minikube-integration/17830-321683/.minikube/files/etc/test/nested/copy/328384/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (70.07s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-linux-amd64 start -p functional-688728 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0108 22:59:17.536803  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/addons-608450/client.crt: no such file or directory
E0108 22:59:17.542812  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/addons-608450/client.crt: no such file or directory
E0108 22:59:17.553074  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/addons-608450/client.crt: no such file or directory
E0108 22:59:17.573347  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/addons-608450/client.crt: no such file or directory
E0108 22:59:17.613617  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/addons-608450/client.crt: no such file or directory
E0108 22:59:17.693980  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/addons-608450/client.crt: no such file or directory
E0108 22:59:17.854454  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/addons-608450/client.crt: no such file or directory
E0108 22:59:18.174787  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/addons-608450/client.crt: no such file or directory
E0108 22:59:18.815839  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/addons-608450/client.crt: no such file or directory
E0108 22:59:20.096976  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/addons-608450/client.crt: no such file or directory
functional_test.go:2233: (dbg) Done: out/minikube-linux-amd64 start -p functional-688728 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m10.065146198s)
--- PASS: TestFunctional/serial/StartWithProxy (70.07s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (36.29s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-688728 --alsologtostderr -v=8
E0108 22:59:22.658178  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/addons-608450/client.crt: no such file or directory
E0108 22:59:27.778932  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/addons-608450/client.crt: no such file or directory
E0108 22:59:38.019369  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/addons-608450/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-688728 --alsologtostderr -v=8: (36.289320649s)
functional_test.go:659: soft start took 36.290059773s for "functional-688728" cluster.
--- PASS: TestFunctional/serial/SoftStart (36.29s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-688728 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.77s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 cache add registry.k8s.io/pause:3.3
E0108 22:59:58.500172  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/addons-608450/client.crt: no such file or directory
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.77s)

TestFunctional/serial/CacheCmd/cache/add_local (4.88s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-688728 /tmp/TestFunctionalserialCacheCmdcacheadd_local1225073909/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 cache add minikube-local-cache-test:functional-688728
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-688728 cache add minikube-local-cache-test:functional-688728: (4.531821273s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 cache delete minikube-local-cache-test:functional-688728
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-688728
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (4.88s)
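This test round-trips a locally built image through minikube's cache. The same cycle by hand, assuming a released minikube binary, the default profile, and any small Dockerfile in the current directory:

	docker build -t minikube-local-cache-test:demo .
	minikube cache add minikube-local-cache-test:demo
	minikube cache delete minikube-local-cache-test:demo
	docker rmi minikube-local-cache-test:demo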

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.71s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-688728 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (293.167271ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.71s)
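The sequence above deletes a cached image inside the node, confirms crictl no longer finds it, and verifies that `cache reload` restores it. By hand, assuming a released minikube binary and the default profile:

	minikube ssh sudo crictl rmi registry.k8s.io/pause:latest
	minikube ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits 1: image gone
	minikube cache reload
	minikube ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again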

TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 kubectl -- --context functional-688728 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-688728 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (29.73s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-688728 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-688728 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (29.73169038s)
functional_test.go:757: restart took 29.731816706s for "functional-688728" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (29.73s)
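--extra-config takes component.flag=value pairs and is applied on restart; here it enables the NamespaceAutoProvision admission plugin on the apiserver. Equivalent invocation by hand, assuming a released minikube binary:

	minikube start --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all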

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-688728 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.38s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-688728 logs: (1.382127301s)
--- PASS: TestFunctional/serial/LogsCmd (1.38s)

TestFunctional/serial/LogsFileCmd (1.4s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 logs --file /tmp/TestFunctionalserialLogsFileCmd1624793723/001/logs.txt
E0108 23:00:39.460755  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/addons-608450/client.crt: no such file or directory
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-688728 logs --file /tmp/TestFunctionalserialLogsFileCmd1624793723/001/logs.txt: (1.402641618s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.40s)

TestFunctional/serial/InvalidService (3.88s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2320: (dbg) Run:  kubectl --context functional-688728 apply -f testdata/invalidsvc.yaml
functional_test.go:2334: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-688728
functional_test.go:2334: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-688728: exit status 115 (350.097058ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30733 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2326: (dbg) Run:  kubectl --context functional-688728 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.88s)
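`minikube service` refuses to print a URL for a service with no running endpoints, exiting 115 with SVC_UNREACHABLE as captured above. To reproduce, assuming a released minikube binary and a service manifest whose selector matches no runnable pod (the testdata path is relative to the minikube source tree):

	kubectl apply -f testdata/invalidsvc.yaml
	minikube service invalid-svc        # exit 115: SVC_UNREACHABLE
	kubectl delete -f testdata/invalidsvc.yaml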

TestFunctional/parallel/ConfigCmd (0.44s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-688728 config get cpus: exit status 14 (73.15618ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-688728 config get cpus: exit status 14 (77.388613ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)
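The exit-14 checks confirm that `config get` distinguishes an unset key from an empty value. The cycle by hand, assuming a released minikube binary:

	minikube config get cpus      # exit 14: key not in config
	minikube config set cpus 2
	minikube config get cpus      # prints 2
	minikube config unset cpus
	minikube config get cpus      # exit 14 again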

TestFunctional/parallel/DashboardCmd (10.08s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-688728 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-688728 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 363782: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.08s)

TestFunctional/parallel/DryRun (0.48s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-688728 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-688728 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (239.804577ms)

-- stdout --
	* [functional-688728] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17830-321683/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-321683/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0108 23:01:19.382926  363810 out.go:296] Setting OutFile to fd 1 ...
	I0108 23:01:19.383129  363810 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 23:01:19.383152  363810 out.go:309] Setting ErrFile to fd 2...
	I0108 23:01:19.383171  363810 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 23:01:19.383555  363810 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-321683/.minikube/bin
	I0108 23:01:19.391728  363810 out.go:303] Setting JSON to false
	I0108 23:01:19.396469  363810 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":13411,"bootTime":1704741468,"procs":335,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 23:01:19.396581  363810 start.go:138] virtualization: kvm guest
	I0108 23:01:19.398738  363810 out.go:177] * [functional-688728] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 23:01:19.400778  363810 out.go:177]   - MINIKUBE_LOCATION=17830
	I0108 23:01:19.400832  363810 notify.go:220] Checking for updates...
	I0108 23:01:19.403299  363810 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 23:01:19.404768  363810 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17830-321683/kubeconfig
	I0108 23:01:19.407713  363810 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-321683/.minikube
	I0108 23:01:19.409408  363810 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 23:01:19.410986  363810 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 23:01:19.413133  363810 config.go:182] Loaded profile config "functional-688728": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 23:01:19.413907  363810 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 23:01:19.439522  363810 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 23:01:19.439631  363810 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 23:01:19.500098  363810 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2024-01-08 23:01:19.491557239 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 23:01:19.500256  363810 docker.go:295] overlay module found
	I0108 23:01:19.502263  363810 out.go:177] * Using the docker driver based on existing profile
	I0108 23:01:19.503660  363810 start.go:298] selected driver: docker
	I0108 23:01:19.503681  363810 start.go:902] validating driver "docker" against &{Name:functional-688728 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-688728 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 23:01:19.503801  363810 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 23:01:19.506126  363810 out.go:177] 
	W0108 23:01:19.507826  363810 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0108 23:01:19.509268  363810 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-688728 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.48s)
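--dry-run runs the full validation path (exit 23 here, since the requested 250MB is below the 1800MB floor) without mutating the cluster; the second invocation shows the same profile validating cleanly. By hand, assuming a released minikube binary:

	minikube start --dry-run --memory 250MB    # exit 23: RSRC_INSUFFICIENT_REQ_MEMORY
	minikube start --dry-run                   # validates without touching the cluster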

TestFunctional/parallel/InternationalLanguage (0.22s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-688728 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-688728 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (219.412718ms)

-- stdout --
	* [functional-688728] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17830-321683/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-321683/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0108 23:01:19.124700  363701 out.go:296] Setting OutFile to fd 1 ...
	I0108 23:01:19.124855  363701 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 23:01:19.124868  363701 out.go:309] Setting ErrFile to fd 2...
	I0108 23:01:19.124876  363701 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 23:01:19.125185  363701 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-321683/.minikube/bin
	I0108 23:01:19.125790  363701 out.go:303] Setting JSON to false
	I0108 23:01:19.126904  363701 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":13411,"bootTime":1704741468,"procs":337,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 23:01:19.126974  363701 start.go:138] virtualization: kvm guest
	I0108 23:01:19.129456  363701 out.go:177] * [functional-688728] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I0108 23:01:19.130810  363701 out.go:177]   - MINIKUBE_LOCATION=17830
	I0108 23:01:19.132245  363701 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 23:01:19.130857  363701 notify.go:220] Checking for updates...
	I0108 23:01:19.133943  363701 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17830-321683/kubeconfig
	I0108 23:01:19.135248  363701 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-321683/.minikube
	I0108 23:01:19.136377  363701 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 23:01:19.137520  363701 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 23:01:19.139272  363701 config.go:182] Loaded profile config "functional-688728": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 23:01:19.139711  363701 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 23:01:19.178107  363701 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 23:01:19.178237  363701 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 23:01:19.257046  363701 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2024-01-08 23:01:19.245325334 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 23:01:19.257142  363701 docker.go:295] overlay module found
	I0108 23:01:19.260425  363701 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0108 23:01:19.262706  363701 start.go:298] selected driver: docker
	I0108 23:01:19.262728  363701 start.go:902] validating driver "docker" against &{Name:functional-688728 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-688728 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 23:01:19.262862  363701 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 23:01:19.265630  363701 out.go:177] 
	W0108 23:01:19.267077  363701 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0108 23:01:19.268906  363701 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.22s)

TestFunctional/parallel/StatusCmd (1s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.00s)
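Status is queried three ways here: default table, a Go-template format string, and JSON. Equivalents by hand, assuming a released minikube binary (the template fields are those shown in the log; the "kublet" spelling there is literal label text in the test's format string, not a field name):

	minikube status
	minikube status -f 'host:{{.Host}},kubelet:{{.Kubelet}}'
	minikube status -o json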

TestFunctional/parallel/ServiceCmdConnect (10.71s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-688728 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-688728 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-zt97g" [58b61537-ac37-4e78-afa2-5209d4400606] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-zt97g" [58b61537-ac37-4e78-afa2-5209d4400606] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.00461671s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:30356
functional_test.go:1674: http://192.168.49.2:30356: success! body:

Hostname: hello-node-connect-55497b8b78-zt97g

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30356
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.71s)
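The flow above is deploy, expose via NodePort, resolve the URL through minikube, then fetch it. By hand, assuming a released minikube binary:

	kubectl create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
	kubectl expose deployment hello-node-connect --type=NodePort --port=8080
	minikube service hello-node-connect --url   # e.g. http://192.168.49.2:30356
	curl -s "$(minikube service hello-node-connect --url)"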

TestFunctional/parallel/AddonsCmd (0.16s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

TestFunctional/parallel/PersistentVolumeClaim (34.82s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [75c6e2c4-44b8-417c-8290-7d3c30022c72] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.007623057s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-688728 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-688728 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-688728 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-688728 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-688728 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [062705b1-9cb7-4460-8c19-ed02401b2f1f] Pending
helpers_test.go:344: "sp-pod" [062705b1-9cb7-4460-8c19-ed02401b2f1f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [062705b1-9cb7-4460-8c19-ed02401b2f1f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 17.004193721s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-688728 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-688728 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-688728 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c5c2eec0-03f9-44d8-807d-5d12820d2b10] Pending
helpers_test.go:344: "sp-pod" [c5c2eec0-03f9-44d8-807d-5d12820d2b10] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [c5c2eec0-03f9-44d8-807d-5d12820d2b10] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.007196098s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-688728 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (34.82s)
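The write/delete/recreate sequence above is what proves persistence: /tmp/mount/foo is written in the first sp-pod, the pod is deleted, a new pod is bound to the same claim, and the file is still listed. A minimal sketch of a claim like the myclaim fetched above (illustrative only; the repo's testdata/storage-provisioner/pvc.yaml and its requested size are not shown in this log):

    kubectl --context functional-688728 apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: myclaim
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 500Mi   # assumed size, not taken from the log
    EOF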

                                                
                                    
TestFunctional/parallel/SSHCmd (0.90s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.90s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 ssh -n functional-688728 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 cp functional-688728:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd305769034/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 ssh -n functional-688728 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 ssh -n functional-688728 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.59s)

                                                
                                    
TestFunctional/parallel/MySQL (22.80s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: (dbg) Run:  kubectl --context functional-688728 replace --force -f testdata/mysql.yaml
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-5tr5f" [369d0040-dd4a-486a-af8b-0be6979ff7ac] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-5tr5f" [369d0040-dd4a-486a-af8b-0be6979ff7ac] Running
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 18.00408914s
functional_test.go:1806: (dbg) Run:  kubectl --context functional-688728 exec mysql-859648c796-5tr5f -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-688728 exec mysql-859648c796-5tr5f -- mysql -ppassword -e "show databases;": exit status 1 (161.561704ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-688728 exec mysql-859648c796-5tr5f -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-688728 exec mysql-859648c796-5tr5f -- mysql -ppassword -e "show databases;": exit status 1 (130.3239ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-688728 exec mysql-859648c796-5tr5f -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-688728 exec mysql-859648c796-5tr5f -- mysql -ppassword -e "show databases;": exit status 1 (134.515493ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-688728 exec mysql-859648c796-5tr5f -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (22.80s)
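The two transient failures above (ERROR 1045, then ERROR 2002) are typical of a mysqld that is still initializing inside the container; the test keeps re-running the query until it succeeds. A rough shell equivalent of that retry (illustrative, not the test's actual Go code):

    until kubectl --context functional-688728 exec mysql-859648c796-5tr5f -- \
        mysql -ppassword -e "show databases;"; do
      sleep 2
    done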

                                                
                                    
TestFunctional/parallel/FileSync (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1928: Checking for existence of /etc/test/nested/copy/328384/hosts within VM
functional_test.go:1930: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 ssh "sudo cat /etc/test/nested/copy/328384/hosts"
functional_test.go:1935: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.33s)

                                                
                                    
TestFunctional/parallel/CertSync (2.00s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1971: Checking for existence of /etc/ssl/certs/328384.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 ssh "sudo cat /etc/ssl/certs/328384.pem"
functional_test.go:1971: Checking for existence of /usr/share/ca-certificates/328384.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 ssh "sudo cat /usr/share/ca-certificates/328384.pem"
functional_test.go:1971: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3283842.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 ssh "sudo cat /etc/ssl/certs/3283842.pem"
functional_test.go:1998: Checking for existence of /usr/share/ca-certificates/3283842.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 ssh "sudo cat /usr/share/ca-certificates/3283842.pem"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.00s)
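The hashed names checked above (51391683.0, 3ec20f2e.0) are OpenSSL subject-hash aliases: TLS libraries locate a CA in /etc/ssl/certs by the hash of its subject, so the sync writes both the .pem and a hash-named copy. To compute the hash a given certificate maps to (illustrative):

    openssl x509 -hash -noout -in /usr/share/ca-certificates/328384.pem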

                                                
                                    
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-688728 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 ssh "sudo systemctl is-active docker"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-688728 ssh "sudo systemctl is-active docker": exit status 1 (290.063812ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 ssh "sudo systemctl is-active containerd"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-688728 ssh "sudo systemctl is-active containerd": exit status 1 (334.600569ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.62s)
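The non-zero exits above are the point of the test: with crio as the active runtime, docker and containerd must be inactive, and systemctl is-active exits with status 3 for an inactive unit (surfaced here as minikube ssh exit status 1). The inverse check should succeed (a sketch, assuming the crio unit name):

    out/minikube-linux-amd64 -p functional-688728 ssh "sudo systemctl is-active crio"
    # expected: prints "active" and exits 0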

                                                
                                    
TestFunctional/parallel/License (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-688728 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-688728
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-688728 image ls --format short --alsologtostderr:
I0108 23:01:21.169375  365034 out.go:296] Setting OutFile to fd 1 ...
I0108 23:01:21.169672  365034 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 23:01:21.169687  365034 out.go:309] Setting ErrFile to fd 2...
I0108 23:01:21.169692  365034 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 23:01:21.169987  365034 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-321683/.minikube/bin
I0108 23:01:21.170832  365034 config.go:182] Loaded profile config "functional-688728": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 23:01:21.170978  365034 config.go:182] Loaded profile config "functional-688728": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 23:01:21.171584  365034 cli_runner.go:164] Run: docker container inspect functional-688728 --format={{.State.Status}}
I0108 23:01:21.190601  365034 ssh_runner.go:195] Run: systemctl --version
I0108 23:01:21.190666  365034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-688728
I0108 23:01:21.208758  365034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/functional-688728/id_rsa Username:docker}
I0108 23:01:21.308010  365034 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-688728 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
| registry.k8s.io/kube-proxy              | v1.28.4            | 83f6cc407eed8 | 74.7MB |
| docker.io/library/nginx                 | latest             | d453dd892d935 | 191MB  |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/kube-controller-manager | v1.28.4            | d058aa5ab969c | 123MB  |
| registry.k8s.io/kube-scheduler          | v1.28.4            | e3db313c6dbc0 | 61.6MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/nginx                 | alpine             | 529b5644c430c | 44.4MB |
| gcr.io/google-containers/addon-resizer  | functional-688728  | ffd4cfbbe753e | 34.1MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 7fe0e6f37db33 | 127MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-688728 image ls --format table --alsologtostderr:
I0108 23:01:21.782261  365336 out.go:296] Setting OutFile to fd 1 ...
I0108 23:01:21.782490  365336 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 23:01:21.782519  365336 out.go:309] Setting ErrFile to fd 2...
I0108 23:01:21.782544  365336 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 23:01:21.782835  365336 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-321683/.minikube/bin
I0108 23:01:21.783597  365336 config.go:182] Loaded profile config "functional-688728": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 23:01:21.783802  365336 config.go:182] Loaded profile config "functional-688728": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 23:01:21.784297  365336 cli_runner.go:164] Run: docker container inspect functional-688728 --format={{.State.Status}}
I0108 23:01:21.805170  365336 ssh_runner.go:195] Run: systemctl --version
I0108 23:01:21.805244  365336 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-688728
I0108 23:01:21.823966  365336 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/functional-688728/id_rsa Username:docker}
I0108 23:01:21.949442  365336 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-688728 image ls --format json --alsologtostderr:
[{"id":"529b5644c430c06553d2e8082c6713fe19a4169c9dc2369cbb960081f52924ff","repoDigests":["docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686","docker.io/library/nginx@sha256:a59278fd22a9d411121e190b8cec8aa57b306aa3332459197777583beb728f59"],"repoTags":["docker.io/library/nginx:alpine"],"size":"44405005"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-688728"],"size":"34114467"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5
"],"size":"31470524"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"127226832"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba","registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fab
fd258ff002cccdeb63678fad08ec0fba32298ffe32"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"61551410"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"65258016"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io
/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c","registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"123261750"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":["registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"74749335"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@s
ha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"d453dd892d9357f3559b967478ae9cbc417b52de66b53142f6c16c8a275486b9","repoDigests":["docker.io/library/nginx@sha256:2bdc49f2f8ae8d8dc50ed00f2ee56d00385c6f8bc8a8b320d0a294d9e3b49026","docker.io/library/nginx@sha256:9784f7985f6fba493ba30fb68419f50484fee8faaf677216cb95826f8491d2e9"],"repoTags":["docker.io/library/nginx:latest"],"size":"190867606"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"re
poTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"82e4c8a736a4
fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-688728 image ls --format json --alsologtostderr:
I0108 23:01:21.456642  365189 out.go:296] Setting OutFile to fd 1 ...
I0108 23:01:21.456820  365189 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 23:01:21.456834  365189 out.go:309] Setting ErrFile to fd 2...
I0108 23:01:21.456848  365189 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 23:01:21.457227  365189 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-321683/.minikube/bin
I0108 23:01:21.457962  365189 config.go:182] Loaded profile config "functional-688728": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 23:01:21.458102  365189 config.go:182] Loaded profile config "functional-688728": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 23:01:21.458633  365189 cli_runner.go:164] Run: docker container inspect functional-688728 --format={{.State.Status}}
I0108 23:01:21.477195  365189 ssh_runner.go:195] Run: systemctl --version
I0108 23:01:21.477269  365189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-688728
I0108 23:01:21.495366  365189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/functional-688728/id_rsa Username:docker}
I0108 23:01:21.620305  365189 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-688728 image ls --format yaml --alsologtostderr:
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "123261750"
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"
- id: 529b5644c430c06553d2e8082c6713fe19a4169c9dc2369cbb960081f52924ff
repoDigests:
- docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686
- docker.io/library/nginx@sha256:a59278fd22a9d411121e190b8cec8aa57b306aa3332459197777583beb728f59
repoTags:
- docker.io/library/nginx:alpine
size: "44405005"
- id: d453dd892d9357f3559b967478ae9cbc417b52de66b53142f6c16c8a275486b9
repoDigests:
- docker.io/library/nginx@sha256:2bdc49f2f8ae8d8dc50ed00f2ee56d00385c6f8bc8a8b320d0a294d9e3b49026
- docker.io/library/nginx@sha256:9784f7985f6fba493ba30fb68419f50484fee8faaf677216cb95826f8491d2e9
repoTags:
- docker.io/library/nginx:latest
size: "190867606"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "74749335"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "61551410"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-688728
size: "34114467"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "127226832"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-688728 image ls --format yaml --alsologtostderr:
I0108 23:01:22.083645  365489 out.go:296] Setting OutFile to fd 1 ...
I0108 23:01:22.083747  365489 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 23:01:22.083755  365489 out.go:309] Setting ErrFile to fd 2...
I0108 23:01:22.083760  365489 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 23:01:22.083994  365489 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-321683/.minikube/bin
I0108 23:01:22.084696  365489 config.go:182] Loaded profile config "functional-688728": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 23:01:22.084875  365489 config.go:182] Loaded profile config "functional-688728": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 23:01:22.085550  365489 cli_runner.go:164] Run: docker container inspect functional-688728 --format={{.State.Status}}
I0108 23:01:22.109915  365489 ssh_runner.go:195] Run: systemctl --version
I0108 23:01:22.109969  365489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-688728
I0108 23:01:22.128804  365489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/functional-688728/id_rsa Username:docker}
I0108 23:01:22.248892  365489 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-688728 ssh pgrep buildkitd: exit status 1 (337.985593ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 image build -t localhost/my-image:functional-688728 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-688728 image build -t localhost/my-image:functional-688728 testdata/build --alsologtostderr: (2.876760038s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-688728 image build -t localhost/my-image:functional-688728 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 4fa083187c4
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-688728
--> 965644374fa
Successfully tagged localhost/my-image:functional-688728
965644374fa7a42c8c4cab6bba8d9810b415f2d98a07f3c8b670fa3afe62d90f
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-688728 image build -t localhost/my-image:functional-688728 testdata/build --alsologtostderr:
I0108 23:01:22.728870  365765 out.go:296] Setting OutFile to fd 1 ...
I0108 23:01:22.728986  365765 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 23:01:22.728994  365765 out.go:309] Setting ErrFile to fd 2...
I0108 23:01:22.728998  365765 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 23:01:22.729239  365765 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-321683/.minikube/bin
I0108 23:01:22.729927  365765 config.go:182] Loaded profile config "functional-688728": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 23:01:22.730597  365765 config.go:182] Loaded profile config "functional-688728": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 23:01:22.731279  365765 cli_runner.go:164] Run: docker container inspect functional-688728 --format={{.State.Status}}
I0108 23:01:22.750255  365765 ssh_runner.go:195] Run: systemctl --version
I0108 23:01:22.750309  365765 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-688728
I0108 23:01:22.768528  365765 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/functional-688728/id_rsa Username:docker}
I0108 23:01:22.864004  365765 build_images.go:151] Building image from path: /tmp/build.3232535855.tar
I0108 23:01:22.864091  365765 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0108 23:01:22.873784  365765 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3232535855.tar
I0108 23:01:22.877819  365765 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3232535855.tar: stat -c "%s %y" /var/lib/minikube/build/build.3232535855.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3232535855.tar': No such file or directory
I0108 23:01:22.877853  365765 ssh_runner.go:362] scp /tmp/build.3232535855.tar --> /var/lib/minikube/build/build.3232535855.tar (3072 bytes)
I0108 23:01:22.905950  365765 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3232535855
I0108 23:01:22.944074  365765 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3232535855 -xf /var/lib/minikube/build/build.3232535855.tar
I0108 23:01:22.955281  365765 crio.go:297] Building image: /var/lib/minikube/build/build.3232535855
I0108 23:01:22.955368  365765 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-688728 /var/lib/minikube/build/build.3232535855 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0108 23:01:25.506249  365765 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-688728 /var/lib/minikube/build/build.3232535855 --cgroup-manager=cgroupfs: (2.550849035s)
I0108 23:01:25.506322  365765 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3232535855
I0108 23:01:25.515033  365765 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3232535855.tar
I0108 23:01:25.523119  365765 build_images.go:207] Built localhost/my-image:functional-688728 from /tmp/build.3232535855.tar
I0108 23:01:25.523157  365765 build_images.go:123] succeeded building to: functional-688728
I0108 23:01:25.523162  365765 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 image ls
2024/01/08 23:01:27 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.45s)
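The STEP output above makes the build context recoverable: testdata/build evidently contains a Dockerfile with FROM gcr.io/k8s-minikube/busybox, RUN true, and ADD content.txt /. Reproducing the build from scratch (the contents of content.txt are not shown in this log, so the placeholder is an assumption):

    mkdir -p /tmp/build && cd /tmp/build
    echo placeholder > content.txt          # actual content.txt is unknown
    cat > Dockerfile <<'EOF'
    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /
    EOF
    out/minikube-linux-amd64 -p functional-688728 image build \
      -t localhost/my-image:functional-688728 .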

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.047730247s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-688728
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.07s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.25s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-688728 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-688728 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-688728 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-688728 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 359417: os: process already finished
helpers_test.go:502: unable to terminate pid 359220: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.83s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-688728 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (14.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-688728 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [f3daa3b1-9ead-4801-8776-f8d038ac0ff4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [f3daa3b1-9ead-4801-8776-f8d038ac0ff4] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 14.004083559s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (14.37s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 image load --daemon gcr.io/google-containers/addon-resizer:functional-688728 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-688728 image load --daemon gcr.io/google-containers/addon-resizer:functional-688728 --alsologtostderr: (4.522686184s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.88s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-688728
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 image load --daemon gcr.io/google-containers/addon-resizer:functional-688728 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-688728 image load --daemon gcr.io/google-containers/addon-resizer:functional-688728 --alsologtostderr: (4.468899797s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.71s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-688728 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.108.235.29 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-688728 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
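Taken together, the tunnel tests above cover the whole lifecycle: start a tunnel, wait for the LoadBalancer service to receive an ingress IP, reach it directly, then tear the tunnel down. Condensed into shell (10.108.235.29 is the IP assigned in this run; run the tunnel in the background or in a separate shell):

    out/minikube-linux-amd64 -p functional-688728 tunnel --alsologtostderr &
    kubectl --context functional-688728 get svc nginx-svc \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    curl -s http://10.108.235.29/   # reachable only while the tunnel runs
    kill %1                         # stop the tunnel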

                                                
                                    
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2255: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
TestFunctional/parallel/Version/components (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2269: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.53s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 image save gcr.io/google-containers/addon-resizer:functional-688728 /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.78s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 image rm gcr.io/google-containers/addon-resizer:functional-688728 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 image load /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-688728 image load /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.026380292s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.39s)

TestFunctional/parallel/ServiceCmd/DeployApp (10.2s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-688728 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-688728 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-5gx7m" [9faaa957-08f0-46e1-9894-be82190cb70d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-5gx7m" [9faaa957-08f0-46e1-9894-be82190cb70d] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.004093704s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.20s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-688728
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 image save --daemon gcr.io/google-containers/addon-resizer:functional-688728 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-688728 image save --daemon gcr.io/google-containers/addon-resizer:functional-688728 --alsologtostderr: (2.442319974s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-688728
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.48s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.40s)

TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "302.174117ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "67.338748ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "310.916097ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "68.728359ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)
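
The profile commands above emit machine-readable JSON on stdout. A minimal Go sketch of consuming "minikube profile list -o json", assuming only the top-level valid/invalid keys of minikube's current schema; the per-entry fields are not shown in this log, so they are kept as raw JSON rather than guessed at:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	func main() {
		// Run the same command the test exercises; "minikube" is assumed on PATH.
		raw, err := exec.Command("minikube", "profile", "list", "-o", "json").Output()
		if err != nil {
			panic(err)
		}
		// Decode only the top-level keys; entries stay raw so no further schema is assumed.
		var profiles struct {
			Valid   []json.RawMessage `json:"valid"`
			Invalid []json.RawMessage `json:"invalid"`
		}
		if err := json.Unmarshal(raw, &profiles); err != nil {
			panic(err)
		}
		fmt.Printf("%d valid, %d invalid profiles\n", len(profiles.Valid), len(profiles.Invalid))
	}
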
TestFunctional/parallel/MountCmd/any-port (5.78s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-688728 /tmp/TestFunctionalparallelMountCmdany-port3128594335/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1704754872108664280" to /tmp/TestFunctionalparallelMountCmdany-port3128594335/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1704754872108664280" to /tmp/TestFunctionalparallelMountCmdany-port3128594335/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1704754872108664280" to /tmp/TestFunctionalparallelMountCmdany-port3128594335/001/test-1704754872108664280
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-688728 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (305.998277ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan  8 23:01 created-by-test
-rw-r--r-- 1 docker docker 24 Jan  8 23:01 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan  8 23:01 test-1704754872108664280
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 ssh cat /mount-9p/test-1704754872108664280
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-688728 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [9fdc626d-e7a1-4ba2-8c44-be1a4c5b61e0] Pending
helpers_test.go:344: "busybox-mount" [9fdc626d-e7a1-4ba2-8c44-be1a4c5b61e0] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [9fdc626d-e7a1-4ba2-8c44-be1a4c5b61e0] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [9fdc626d-e7a1-4ba2-8c44-be1a4c5b61e0] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.004158047s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-688728 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-688728 /tmp/TestFunctionalparallelMountCmdany-port3128594335/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.78s)
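
The first "findmnt -T /mount-9p" probe above exits non-zero because the 9p server is still coming up; the harness simply retries the probe, and it succeeds on the second attempt. A minimal sketch of that retry pattern (hypothetical helper, not the harness's actual code):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForMount polls "minikube ssh findmnt" until the 9p mount is visible
	// inside the guest or the deadline passes. Profile name and mount point
	// here are taken from this run but are otherwise illustrative.
	func waitForMount(profile, mountPoint string, deadline time.Duration) error {
		stop := time.Now().Add(deadline)
		for time.Now().Before(stop) {
			probe := fmt.Sprintf("findmnt -T %s | grep 9p", mountPoint)
			if exec.Command("minikube", "-p", profile, "ssh", probe).Run() == nil {
				return nil // mount is visible inside the guest
			}
			time.Sleep(time.Second)
		}
		return fmt.Errorf("mount %s not ready after %v", mountPoint, deadline)
	}

	func main() {
		if err := waitForMount("functional-688728", "/mount-9p", 30*time.Second); err != nil {
			panic(err)
		}
	}
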
TestFunctional/parallel/MountCmd/specific-port (2.19s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-688728 /tmp/TestFunctionalparallelMountCmdspecific-port2400105623/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-688728 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (288.292254ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-688728 /tmp/TestFunctionalparallelMountCmdspecific-port2400105623/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-688728 ssh "sudo umount -f /mount-9p": exit status 1 (315.623752ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr **
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-688728 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-688728 /tmp/TestFunctionalparallelMountCmdspecific-port2400105623/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.19s)

TestFunctional/parallel/ServiceCmd/List (0.98s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.98s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.84s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 service list -o json
functional_test.go:1488: (dbg) Done: out/minikube-linux-amd64 -p functional-688728 service list -o json: (1.836244173s)
functional_test.go:1493: Took "1.836374704s" to run "out/minikube-linux-amd64 -p functional-688728 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.84s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.25s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-688728 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3375465221/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-688728 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3375465221/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-688728 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3375465221/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-688728 ssh "findmnt -T" /mount1: exit status 1 (439.62011ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-688728 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-688728 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3375465221/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-688728 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3375465221/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-688728 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3375465221/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.25s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.59s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:31652
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.59s)

TestFunctional/parallel/ServiceCmd/Format (0.69s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.69s)

TestFunctional/parallel/ServiceCmd/URL (0.68s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-688728 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:31652
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.68s)
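
The service subcommands above resolve a NodePort endpoint for the hello-node service (here http://192.168.49.2:31652). A minimal sketch of probing such an endpoint once its URL has been captured from "minikube service --url"; the URL literal below is the one from this run and purely illustrative:

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Endpoint as printed by "minikube service hello-node --url"; value illustrative.
		url := "http://192.168.49.2:31652"
		client := &http.Client{Timeout: 5 * time.Second}
		resp, err := client.Get(url)
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("status %d, %d bytes\n", resp.StatusCode, len(body))
	}
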
TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-688728
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-688728
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-688728
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (65.74s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-713577 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0108 23:02:01.382709  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/addons-608450/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-713577 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m5.738836575s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (65.74s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (9.37s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-713577 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-713577 addons enable ingress --alsologtostderr -v=5: (9.368317801s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (9.37s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.6s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-713577 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.60s)

TestJSONOutput/start/Command (69.27s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-989799 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0108 23:05:46.801025  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/functional-688728/client.crt: no such file or directory
E0108 23:05:49.361290  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/functional-688728/client.crt: no such file or directory
E0108 23:05:54.481934  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/functional-688728/client.crt: no such file or directory
E0108 23:06:04.722245  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/functional-688728/client.crt: no such file or directory
E0108 23:06:25.202852  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/functional-688728/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-989799 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m9.269712572s)
--- PASS: TestJSONOutput/start/Command (69.27s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.68s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-989799 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.68s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.61s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-989799 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.79s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-989799 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-989799 --output=json --user=testUser: (5.790401296s)
--- PASS: TestJSONOutput/stop/Command (5.79s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-648587 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-648587 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (90.495346ms)
-- stdout --
	{"specversion":"1.0","id":"8bb509cf-8009-493d-b12a-565e362e48f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-648587] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7fb7ffd0-1d06-4176-bf68-c18f8465258f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17830"}}
	{"specversion":"1.0","id":"22e064c9-c28e-424e-b6fe-b5c94dc10943","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"390fe0cb-0438-43c3-a7d5-b1c9bc4dad4b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17830-321683/kubeconfig"}}
	{"specversion":"1.0","id":"e655171a-eadb-4648-9119-f736cfa42e05","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-321683/.minikube"}}
	{"specversion":"1.0","id":"3045c61e-f765-4749-baeb-8ea10ef293ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"5dc9072b-d707-4b8c-935a-32d17f9be44e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b66f59d3-e525-4517-a01e-5ef0de2b2e86","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-648587" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-648587
--- PASS: TestErrorJSONOutput (0.24s)
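
Each stdout line above is a CloudEvents-style JSON object whose data fields are strings. A minimal sketch of scanning such a stream and surfacing the io.k8s.sigs.minikube.error event, using only the field names visible in the output above:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// event mirrors the fields visible in the log; all other fields are ignored.
	type event struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin) // pipe "minikube start --output=json" output here
		for sc.Scan() {
			var ev event
			if json.Unmarshal(sc.Bytes(), &ev) != nil {
				continue // skip any non-JSON lines
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("exit code %s: %s\n", ev.Data["exitcode"], ev.Data["message"])
			}
		}
	}
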
TestKicCustomNetwork/create_custom_network (31.82s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-450660 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-450660 --network=: (29.742401974s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-450660" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-450660
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-450660: (2.057300486s)
--- PASS: TestKicCustomNetwork/create_custom_network (31.82s)

TestKicCustomNetwork/use_default_bridge_network (25.58s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-882280 --network=bridge
E0108 23:07:47.479543  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/client.crt: no such file or directory
E0108 23:07:47.484868  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/client.crt: no such file or directory
E0108 23:07:47.495235  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/client.crt: no such file or directory
E0108 23:07:47.515638  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/client.crt: no such file or directory
E0108 23:07:47.556126  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/client.crt: no such file or directory
E0108 23:07:47.636489  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/client.crt: no such file or directory
E0108 23:07:47.796943  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/client.crt: no such file or directory
E0108 23:07:48.117556  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/client.crt: no such file or directory
E0108 23:07:48.758514  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/client.crt: no such file or directory
E0108 23:07:50.038844  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/client.crt: no such file or directory
E0108 23:07:52.599784  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/client.crt: no such file or directory
E0108 23:07:57.720921  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-882280 --network=bridge: (23.669902299s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-882280" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-882280
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-882280: (1.895831079s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (25.58s)

TestKicExistingNetwork (24.94s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-285567 --network=existing-network
E0108 23:08:07.961832  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/client.crt: no such file or directory
E0108 23:08:28.086067  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/functional-688728/client.crt: no such file or directory
E0108 23:08:28.442531  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-285567 --network=existing-network: (22.91444115s)
helpers_test.go:175: Cleaning up "existing-network-285567" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-285567
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-285567: (1.886112871s)
--- PASS: TestKicExistingNetwork (24.94s)

TestKicCustomSubnet (28.26s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-474335 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-474335 --subnet=192.168.60.0/24: (26.1574952s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-474335 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-474335" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-474335
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-474335: (2.086163085s)
--- PASS: TestKicCustomSubnet (28.26s)
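
The --format template passed to docker network inspect above, "{{(index .IPAM.Config 0).Subnet}}", indexes the first IPAM config block of the network and prints its subnet. A sketch of the same query from Go, shelling out to the docker CLI; the network name is the one created in this run:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same Go template the test passes to docker; prints e.g. 192.168.60.0/24.
		out, err := exec.Command("docker", "network", "inspect",
			"custom-subnet-474335", "--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println(strings.TrimSpace(string(out)))
	}
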
TestKicStaticIP (24.72s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-428755 --static-ip=192.168.200.200
E0108 23:09:09.403240  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/client.crt: no such file or directory
E0108 23:09:17.536136  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/addons-608450/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-428755 --static-ip=192.168.200.200: (22.50070277s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-428755 ip
helpers_test.go:175: Cleaning up "static-ip-428755" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-428755
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-428755: (2.074854835s)
--- PASS: TestKicStaticIP (24.72s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (54.62s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-331352 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-331352 --driver=docker  --container-runtime=crio: (25.127690289s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-335089 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-335089 --driver=docker  --container-runtime=crio: (24.244761701s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-331352
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-335089
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-335089" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-335089
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-335089: (1.923535657s)
helpers_test.go:175: Cleaning up "first-331352" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-331352
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-331352: (2.253697543s)
--- PASS: TestMinikubeProfile (54.62s)

TestMountStart/serial/StartWithMountFirst (8.21s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-865282 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-865282 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.209109977s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.21s)

TestMountStart/serial/VerifyMountFirst (0.28s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-865282 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

TestMountStart/serial/StartWithMountSecond (8.13s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-884208 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
E0108 23:10:31.323436  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-884208 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.127561534s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.13s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-884208 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.64s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-865282 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-865282 --alsologtostderr -v=5: (1.64102872s)
--- PASS: TestMountStart/serial/DeleteFirst (1.64s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-884208 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.23s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-884208
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-884208: (1.228427534s)
--- PASS: TestMountStart/serial/Stop (1.23s)

TestMountStart/serial/RestartStopped (7.02s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-884208
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-884208: (6.021613554s)
E0108 23:10:44.241749  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/functional-688728/client.crt: no such file or directory
--- PASS: TestMountStart/serial/RestartStopped (7.02s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-884208 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (118.29s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-659947 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0108 23:11:11.927160  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/functional-688728/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p multinode-659947 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m57.815557131s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-amd64 -p multinode-659947 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (118.29s)

TestMultiNode/serial/DeployApp2Nodes (3.74s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-659947 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-659947 -- rollout status deployment/busybox
E0108 23:12:47.479657  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/client.crt: no such file or directory
multinode_test.go:514: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-659947 -- rollout status deployment/busybox: (2.065626596s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-659947 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-659947 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-659947 -- exec busybox-5bc68d56bd-d8rhc -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-659947 -- exec busybox-5bc68d56bd-wpl2n -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-659947 -- exec busybox-5bc68d56bd-d8rhc -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-659947 -- exec busybox-5bc68d56bd-wpl2n -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-659947 -- exec busybox-5bc68d56bd-d8rhc -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-659947 -- exec busybox-5bc68d56bd-wpl2n -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.74s)

TestMultiNode/serial/AddNode (19.52s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-659947 -v 3 --alsologtostderr
multinode_test.go:111: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-659947 -v 3 --alsologtostderr: (18.883824916s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-amd64 -p multinode-659947 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (19.52s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-659947 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.29s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.29s)

TestMultiNode/serial/CopyFile (9.81s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p multinode-659947 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-659947 cp testdata/cp-test.txt multinode-659947:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-659947 ssh -n multinode-659947 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-659947 cp multinode-659947:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3604650319/001/cp-test_multinode-659947.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-659947 ssh -n multinode-659947 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-659947 cp multinode-659947:/home/docker/cp-test.txt multinode-659947-m02:/home/docker/cp-test_multinode-659947_multinode-659947-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-659947 ssh -n multinode-659947 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-659947 ssh -n multinode-659947-m02 "sudo cat /home/docker/cp-test_multinode-659947_multinode-659947-m02.txt"
E0108 23:13:15.164012  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-659947 cp multinode-659947:/home/docker/cp-test.txt multinode-659947-m03:/home/docker/cp-test_multinode-659947_multinode-659947-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-659947 ssh -n multinode-659947 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-659947 ssh -n multinode-659947-m03 "sudo cat /home/docker/cp-test_multinode-659947_multinode-659947-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-659947 cp testdata/cp-test.txt multinode-659947-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-659947 ssh -n multinode-659947-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-659947 cp multinode-659947-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3604650319/001/cp-test_multinode-659947-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-659947 ssh -n multinode-659947-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-659947 cp multinode-659947-m02:/home/docker/cp-test.txt multinode-659947:/home/docker/cp-test_multinode-659947-m02_multinode-659947.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-659947 ssh -n multinode-659947-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-659947 ssh -n multinode-659947 "sudo cat /home/docker/cp-test_multinode-659947-m02_multinode-659947.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-659947 cp multinode-659947-m02:/home/docker/cp-test.txt multinode-659947-m03:/home/docker/cp-test_multinode-659947-m02_multinode-659947-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-659947 ssh -n multinode-659947-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-659947 ssh -n multinode-659947-m03 "sudo cat /home/docker/cp-test_multinode-659947-m02_multinode-659947-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-659947 cp testdata/cp-test.txt multinode-659947-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-659947 ssh -n multinode-659947-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-659947 cp multinode-659947-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3604650319/001/cp-test_multinode-659947-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-659947 ssh -n multinode-659947-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-659947 cp multinode-659947-m03:/home/docker/cp-test.txt multinode-659947:/home/docker/cp-test_multinode-659947-m03_multinode-659947.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-659947 ssh -n multinode-659947-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-659947 ssh -n multinode-659947 "sudo cat /home/docker/cp-test_multinode-659947-m03_multinode-659947.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-659947 cp multinode-659947-m03:/home/docker/cp-test.txt multinode-659947-m02:/home/docker/cp-test_multinode-659947-m03_multinode-659947-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-659947 ssh -n multinode-659947-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-659947 ssh -n multinode-659947-m02 "sudo cat /home/docker/cp-test_multinode-659947-m03_multinode-659947-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.81s)
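Note: the copy matrix above exercises every node pair, but the underlying pattern is just two commands, sketched here against the profile used in this run. `cp` accepts a node-qualified path on either side, and `ssh -n` selects which node to verify on:

$ out/minikube-linux-amd64 -p multinode-659947 cp testdata/cp-test.txt multinode-659947-m02:/home/docker/cp-test.txt
$ out/minikube-linux-amd64 -p multinode-659947 ssh -n multinode-659947-m02 "sudo cat /home/docker/cp-test.txt"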
TestMultiNode/serial/StopNode (2.16s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p multinode-659947 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-amd64 -p multinode-659947 node stop m03: (1.204000575s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p multinode-659947 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-659947 status: exit status 7 (481.659643ms)
-- stdout --
	multinode-659947
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-659947-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-659947-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-amd64 -p multinode-659947 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-659947 status --alsologtostderr: exit status 7 (470.899818ms)
-- stdout --
	multinode-659947
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-659947-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-659947-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0108 23:13:24.097236  423840 out.go:296] Setting OutFile to fd 1 ...
	I0108 23:13:24.097353  423840 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 23:13:24.097358  423840 out.go:309] Setting ErrFile to fd 2...
	I0108 23:13:24.097362  423840 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 23:13:24.097571  423840 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-321683/.minikube/bin
	I0108 23:13:24.097766  423840 out.go:303] Setting JSON to false
	I0108 23:13:24.097823  423840 mustload.go:65] Loading cluster: multinode-659947
	I0108 23:13:24.097914  423840 notify.go:220] Checking for updates...
	I0108 23:13:24.098227  423840 config.go:182] Loaded profile config "multinode-659947": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 23:13:24.098243  423840 status.go:255] checking status of multinode-659947 ...
	I0108 23:13:24.098726  423840 cli_runner.go:164] Run: docker container inspect multinode-659947 --format={{.State.Status}}
	I0108 23:13:24.115638  423840 status.go:330] multinode-659947 host status = "Running" (err=<nil>)
	I0108 23:13:24.115662  423840 host.go:66] Checking if "multinode-659947" exists ...
	I0108 23:13:24.115936  423840 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-659947
	I0108 23:13:24.133247  423840 host.go:66] Checking if "multinode-659947" exists ...
	I0108 23:13:24.133519  423840 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 23:13:24.133570  423840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-659947
	I0108 23:13:24.149568  423840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/multinode-659947/id_rsa Username:docker}
	I0108 23:13:24.240303  423840 ssh_runner.go:195] Run: systemctl --version
	I0108 23:13:24.244092  423840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 23:13:24.254034  423840 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 23:13:24.304519  423840 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:55 SystemTime:2024-01-08 23:13:24.296281042 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 23:13:24.305071  423840 kubeconfig.go:92] found "multinode-659947" server: "https://192.168.58.2:8443"
	I0108 23:13:24.305095  423840 api_server.go:166] Checking apiserver status ...
	I0108 23:13:24.305127  423840 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 23:13:24.315382  423840 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1450/cgroup
	I0108 23:13:24.324003  423840 api_server.go:182] apiserver freezer: "11:freezer:/docker/5d2b1864fb291cb87831fe263a8dbae2ec490d0c27f2d6deb1b4fd6d0f2d60bc/crio/crio-3d4eff1707940b68e93b90a2576fc893e8801332608d067bdb425a14d9677da4"
	I0108 23:13:24.324077  423840 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/5d2b1864fb291cb87831fe263a8dbae2ec490d0c27f2d6deb1b4fd6d0f2d60bc/crio/crio-3d4eff1707940b68e93b90a2576fc893e8801332608d067bdb425a14d9677da4/freezer.state
	I0108 23:13:24.331990  423840 api_server.go:204] freezer state: "THAWED"
	I0108 23:13:24.332027  423840 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0108 23:13:24.336213  423840 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0108 23:13:24.336237  423840 status.go:421] multinode-659947 apiserver status = Running (err=<nil>)
	I0108 23:13:24.336247  423840 status.go:257] multinode-659947 status: &{Name:multinode-659947 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0108 23:13:24.336265  423840 status.go:255] checking status of multinode-659947-m02 ...
	I0108 23:13:24.336547  423840 cli_runner.go:164] Run: docker container inspect multinode-659947-m02 --format={{.State.Status}}
	I0108 23:13:24.353633  423840 status.go:330] multinode-659947-m02 host status = "Running" (err=<nil>)
	I0108 23:13:24.353658  423840 host.go:66] Checking if "multinode-659947-m02" exists ...
	I0108 23:13:24.353941  423840 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-659947-m02
	I0108 23:13:24.370958  423840 host.go:66] Checking if "multinode-659947-m02" exists ...
	I0108 23:13:24.371254  423840 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 23:13:24.371322  423840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-659947-m02
	I0108 23:13:24.387470  423840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33154 SSHKeyPath:/home/jenkins/minikube-integration/17830-321683/.minikube/machines/multinode-659947-m02/id_rsa Username:docker}
	I0108 23:13:24.480239  423840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 23:13:24.490568  423840 status.go:257] multinode-659947-m02 status: &{Name:multinode-659947-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0108 23:13:24.490610  423840 status.go:255] checking status of multinode-659947-m03 ...
	I0108 23:13:24.490947  423840 cli_runner.go:164] Run: docker container inspect multinode-659947-m03 --format={{.State.Status}}
	I0108 23:13:24.506985  423840 status.go:330] multinode-659947-m03 host status = "Stopped" (err=<nil>)
	I0108 23:13:24.507008  423840 status.go:343] host is not running, skipping remaining checks
	I0108 23:13:24.507013  423840 status.go:257] multinode-659947-m03 status: &{Name:multinode-659947-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.16s)
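Note: the verbose trace above shows how each status line is derived; a minimal sketch of the same probes, assuming the docker driver and this run's profile (container and pod IDs elided):

$ docker container inspect multinode-659947 --format={{.State.Status}}                                  # host state
$ out/minikube-linux-amd64 -p multinode-659947 ssh "sudo systemctl is-active --quiet service kubelet"    # kubelet
$ sudo cat /sys/fs/cgroup/freezer/docker/<container-id>/crio/crio-<apiserver-id>/freezer.state           # apiserver frozen?

A stopped host short-circuits the remaining checks ("host is not running, skipping remaining checks"), which is why `status` exits 7 here instead of 0.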
TestMultiNode/serial/StartAfterStop (10.79s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:272: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-659947 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-659947 node start m03 --alsologtostderr: (10.087063094s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p multinode-659947 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.79s)
TestMultiNode/serial/RestartKeepsNodes (117.12s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-659947
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-659947
multinode_test.go:318: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-659947: (24.892297503s)
multinode_test.go:323: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-659947 --wait=true -v=8 --alsologtostderr
E0108 23:14:17.535893  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/addons-608450/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-amd64 start -p multinode-659947 --wait=true -v=8 --alsologtostderr: (1m32.105872192s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-659947
--- PASS: TestMultiNode/serial/RestartKeepsNodes (117.12s)
TestMultiNode/serial/DeleteNode (4.74s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-659947 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p multinode-659947 node delete m03: (4.13787786s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p multinode-659947 status --alsologtostderr
multinode_test.go:442: (dbg) Run:  docker volume ls
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.74s)
TestMultiNode/serial/StopMultiNode (23.86s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p multinode-659947 stop
E0108 23:15:40.584330  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/addons-608450/client.crt: no such file or directory
E0108 23:15:44.242210  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/functional-688728/client.crt: no such file or directory
multinode_test.go:342: (dbg) Done: out/minikube-linux-amd64 -p multinode-659947 stop: (23.655399877s)
multinode_test.go:348: (dbg) Run:  out/minikube-linux-amd64 -p multinode-659947 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-659947 status: exit status 7 (101.770438ms)
-- stdout --
	multinode-659947
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-659947-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p multinode-659947 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-659947 status --alsologtostderr: exit status 7 (101.305318ms)
-- stdout --
	multinode-659947
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-659947-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0108 23:16:00.981923  433788 out.go:296] Setting OutFile to fd 1 ...
	I0108 23:16:00.982189  433788 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 23:16:00.982199  433788 out.go:309] Setting ErrFile to fd 2...
	I0108 23:16:00.982204  433788 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 23:16:00.982384  433788 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-321683/.minikube/bin
	I0108 23:16:00.982545  433788 out.go:303] Setting JSON to false
	I0108 23:16:00.982590  433788 mustload.go:65] Loading cluster: multinode-659947
	I0108 23:16:00.982685  433788 notify.go:220] Checking for updates...
	I0108 23:16:00.983008  433788 config.go:182] Loaded profile config "multinode-659947": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 23:16:00.983023  433788 status.go:255] checking status of multinode-659947 ...
	I0108 23:16:00.983534  433788 cli_runner.go:164] Run: docker container inspect multinode-659947 --format={{.State.Status}}
	I0108 23:16:01.004318  433788 status.go:330] multinode-659947 host status = "Stopped" (err=<nil>)
	I0108 23:16:01.004350  433788 status.go:343] host is not running, skipping remaining checks
	I0108 23:16:01.004367  433788 status.go:257] multinode-659947 status: &{Name:multinode-659947 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0108 23:16:01.004417  433788 status.go:255] checking status of multinode-659947-m02 ...
	I0108 23:16:01.004728  433788 cli_runner.go:164] Run: docker container inspect multinode-659947-m02 --format={{.State.Status}}
	I0108 23:16:01.021513  433788 status.go:330] multinode-659947-m02 host status = "Stopped" (err=<nil>)
	I0108 23:16:01.021537  433788 status.go:343] host is not running, skipping remaining checks
	I0108 23:16:01.021542  433788 status.go:257] multinode-659947-m02 status: &{Name:multinode-659947-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.86s)
TestMultiNode/serial/RestartMultiNode (79.05s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:372: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-659947 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:382: (dbg) Done: out/minikube-linux-amd64 start -p multinode-659947 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m18.4379385s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p multinode-659947 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (79.05s)
TestMultiNode/serial/ValidateNameConflict (26.24s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-659947
multinode_test.go:480: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-659947-m02 --driver=docker  --container-runtime=crio
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-659947-m02 --driver=docker  --container-runtime=crio: exit status 14 (86.726166ms)
-- stdout --
	* [multinode-659947-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17830-321683/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-321683/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-659947-m02' is duplicated with machine name 'multinode-659947-m02' in profile 'multinode-659947'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-659947-m03 --driver=docker  --container-runtime=crio
multinode_test.go:488: (dbg) Done: out/minikube-linux-amd64 start -p multinode-659947-m03 --driver=docker  --container-runtime=crio: (23.89588876s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-659947
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-659947: exit status 80 (290.524536ms)
-- stdout --
	* Adding node m03 to cluster multinode-659947
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-659947-m03 already exists in multinode-659947-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-659947-m03
multinode_test.go:500: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-659947-m03: (1.903858018s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (26.24s)
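Note: the two refusals exercised above, in short: a new profile may not reuse a machine name belonging to an existing multi-node profile, and `node add` refuses to create a node whose generated name is already taken by a standalone profile. Deleting that profile, as the test does last, frees the name again:

$ out/minikube-linux-amd64 start -p multinode-659947-m02 --driver=docker --container-runtime=crio   # exit 14 (MK_USAGE): collides with an existing machine
$ out/minikube-linux-amd64 node add -p multinode-659947                                             # exit 80 (GUEST_NODE_ADD) while a profile owns the m03 name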
TestPreload (150.28s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-143901 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-143901 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m13.100474872s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-143901 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-143901 image pull gcr.io/k8s-minikube/busybox: (1.056681643s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-143901
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-143901: (5.712455799s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-143901 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E0108 23:19:17.536899  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/addons-608450/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-143901 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m7.859937134s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-143901 image list
helpers_test.go:175: Cleaning up "test-preload-143901" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-143901
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-143901: (2.314762694s)
--- PASS: TestPreload (150.28s)
TestScheduledStopUnix (97.53s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-637044 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-637044 --memory=2048 --driver=docker  --container-runtime=crio: (21.565162219s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-637044 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-637044 -n scheduled-stop-637044
E0108 23:20:44.241534  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/functional-688728/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-637044 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-637044 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-637044 -n scheduled-stop-637044
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-637044
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-637044 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-637044
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-637044: exit status 7 (83.202262ms)
-- stdout --
	scheduled-stop-637044
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-637044 -n scheduled-stop-637044
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-637044 -n scheduled-stop-637044: exit status 7 (80.850197ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-637044" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-637044
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-637044: (4.451818791s)
--- PASS: TestScheduledStopUnix (97.53s)
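Note: for reference, the scheduled-stop surface exercised above, as a minimal sketch against this run's profile:

$ out/minikube-linux-amd64 stop -p scheduled-stop-637044 --schedule 5m                 # arm a stop five minutes out
$ out/minikube-linux-amd64 stop -p scheduled-stop-637044 --cancel-scheduled            # disarm it again
$ out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-637044    # inspect the countdown

Re-issuing --schedule replaces the pending stop (hence the "process already finished" signal notes above); once a 15s schedule is allowed to fire, `status` reports Stopped with exit 7.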
TestInsufficientStorage (13.23s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-487092 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
E0108 23:22:07.287725  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/functional-688728/client.crt: no such file or directory
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-487092 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.802009308s)
-- stdout --
	{"specversion":"1.0","id":"edb1010d-7308-4a54-b48a-012b02d9a8bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-487092] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c21b7929-e123-4ac6-8367-6ba7d2642919","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17830"}}
	{"specversion":"1.0","id":"a893af46-d788-4b15-9a8b-d44ae00573e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2a507aa6-bf4d-437f-8abf-f20e48aae204","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17830-321683/kubeconfig"}}
	{"specversion":"1.0","id":"a1020bee-c649-4820-a960-093fbeb251ee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-321683/.minikube"}}
	{"specversion":"1.0","id":"d88d641d-e00c-4908-9dfd-2a235161a8ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"56dc68a0-f0c5-476b-9c4e-fe463a7f4689","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6bf02a7d-510a-4c80-a3a3-edb88936b481","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"8fc1b111-54de-4275-aa27-e71ed811bba1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"24b67885-28c3-4ac3-aa16-232ffcd27f5c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"e83a4af6-59c0-41ed-b9f6-b622f70de3c0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"1ae2ccc4-ce60-43b7-b229-a17f42235155","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-487092 in cluster insufficient-storage-487092","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"131ee50d-1e4c-436a-a5a8-d88a13aeecaf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1704751654-17830 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"1bc35c64-88fd-46e2-9634-523ee03edfa3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"fc502741-cc13-4c45-92ad-af8f12ee56e4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-487092 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-487092 --output=json --layout=cluster: exit status 7 (279.553837ms)
-- stdout --
	{"Name":"insufficient-storage-487092","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-487092","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0108 23:22:10.970923  455048 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-487092" does not appear in /home/jenkins/minikube-integration/17830-321683/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-487092 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-487092 --output=json --layout=cluster: exit status 7 (277.252288ms)
-- stdout --
	{"Name":"insufficient-storage-487092","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-487092","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0108 23:22:11.248113  455134 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-487092" does not appear in /home/jenkins/minikube-integration/17830-321683/kubeconfig
	E0108 23:22:11.258191  455134 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/insufficient-storage-487092/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-487092" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-487092
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-487092: (1.868977447s)
--- PASS: TestInsufficientStorage (13.23s)
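Note: with --output=json, start emits one CloudEvents-style JSON object per line, so failures can be picked out mechanically; a minimal sketch, assuming jq is available on the host:

$ out/minikube-linux-amd64 start -p insufficient-storage-487092 --memory=2048 --output=json --wait=true --driver=docker --container-runtime=crio \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'

The test appears to force the shortage via the MINIKUBE_TEST_STORAGE_CAPACITY / MINIKUBE_TEST_AVAILABLE_STORAGE knobs visible in the event stream; the resulting RSRC_DOCKER_STORAGE event carries exitcode 26 plus the cleanup advice shown above.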
TestKubernetesUpgrade (343.89s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-146344 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-146344 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (45.019637103s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-146344
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-146344: (3.43568933s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-146344 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-146344 status --format={{.Host}}: exit status 7 (91.464666ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-146344 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0108 23:24:10.524716  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/client.crt: no such file or directory
E0108 23:24:17.536804  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/addons-608450/client.crt: no such file or directory
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-146344 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m30.189296125s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-146344 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-146344 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-146344 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (83.910151ms)
-- stdout --
	* [kubernetes-upgrade-146344] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17830-321683/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-321683/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-146344
	    minikube start -p kubernetes-upgrade-146344 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1463442 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-146344 --kubernetes-version=v1.29.0-rc.2
	    
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-146344 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-146344 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (22.835289914s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-146344" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-146344
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-146344: (2.172093725s)
--- PASS: TestKubernetesUpgrade (343.89s)
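Note: the upgrade path exercised above reduces to stop-then-start with a newer --kubernetes-version; a sketch using this run's versions:

$ out/minikube-linux-amd64 start -p kubernetes-upgrade-146344 --kubernetes-version=v1.16.0 --driver=docker --container-runtime=crio
$ out/minikube-linux-amd64 stop -p kubernetes-upgrade-146344
$ out/minikube-linux-amd64 start -p kubernetes-upgrade-146344 --kubernetes-version=v1.29.0-rc.2 --driver=docker --container-runtime=crio

Upgrades happen in place; a downgrade attempt is refused up front (exit 106, K8S_DOWNGRADE_UNSUPPORTED) with the delete-and-recreate suggestions printed above.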
TestMissingContainerUpgrade (145.39s)
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.9.0.1720323317.exe start -p missing-upgrade-842047 --memory=2200 --driver=docker  --container-runtime=crio
E0108 23:22:47.479135  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/client.crt: no such file or directory
version_upgrade_test.go:322: (dbg) Done: /tmp/minikube-v1.9.0.1720323317.exe start -p missing-upgrade-842047 --memory=2200 --driver=docker  --container-runtime=crio: (1m18.789128579s)
version_upgrade_test.go:331: (dbg) Run:  docker stop missing-upgrade-842047
version_upgrade_test.go:336: (dbg) Run:  docker rm missing-upgrade-842047
version_upgrade_test.go:342: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-842047 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:342: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-842047 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m3.656816443s)
helpers_test.go:175: Cleaning up "missing-upgrade-842047" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-842047
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-842047: (2.111618182s)
--- PASS: TestMissingContainerUpgrade (145.39s)
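Note: what this test simulates is a node container deleted behind minikube's back; recovery is simply re-running start with the newer binary, sketched here:

$ docker stop missing-upgrade-842047 && docker rm missing-upgrade-842047                                            # yank the node container out from under the profile
$ out/minikube-linux-amd64 start -p missing-upgrade-842047 --memory=2200 --driver=docker --container-runtime=crio   # recreates the missing container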
TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-685846 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-685846 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (108.41919ms)
-- stdout --
	* [NoKubernetes-685846] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17830-321683/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-321683/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
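Note: --no-kubernetes and --kubernetes-version are mutually exclusive (exit 14, MK_USAGE); if a version is pinned in the global config, the hint above applies before retrying:

$ minikube config unset kubernetes-version
$ out/minikube-linux-amd64 start -p NoKubernetes-685846 --no-kubernetes --driver=docker --container-runtime=crio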
TestNoKubernetes/serial/StartWithK8s (40.2s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-685846 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-685846 --driver=docker  --container-runtime=crio: (39.808117632s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-685846 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (40.20s)
TestNoKubernetes/serial/StartWithStopK8s (9.86s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-685846 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-685846 --no-kubernetes --driver=docker  --container-runtime=crio: (5.187009939s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-685846 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-685846 status -o json: exit status 2 (364.452982ms)
-- stdout --
	{"Name":"NoKubernetes-685846","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-685846
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-685846: (4.31123327s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (9.86s)
TestNoKubernetes/serial/Start (6.06s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-685846 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-685846 --no-kubernetes --driver=docker  --container-runtime=crio: (6.063773148s)
--- PASS: TestNoKubernetes/serial/Start (6.06s)
TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-685846 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-685846 "sudo systemctl is-active --quiet service kubelet": exit status 1 (323.163471ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)
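Note: `systemctl is-active --quiet` prints nothing and exits 0 only for an active unit, so the ssh wrapper's exit status is the whole check (status 3 is systemd's usual code for an inactive unit); a minimal sketch:

$ out/minikube-linux-amd64 ssh -p NoKubernetes-685846 "sudo systemctl is-active --quiet service kubelet" || echo "kubelet is not running"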
TestNoKubernetes/serial/ProfileList (1.46s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.46s)
TestNoKubernetes/serial/Stop (1.26s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-685846
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-685846: (1.260452578s)
--- PASS: TestNoKubernetes/serial/Stop (1.26s)
TestNoKubernetes/serial/StartNoArgs (7.46s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-685846 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-685846 --driver=docker  --container-runtime=crio: (7.459804021s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.46s)
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-685846 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-685846 "sudo systemctl is-active --quiet service kubelet": exit status 1 (321.738031ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)
TestStoppedBinaryUpgrade/Setup (0.34s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.34s)
TestStoppedBinaryUpgrade/MinikubeLogs (0.54s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-874472
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.54s)
TestPause/serial/Start (49.62s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-056673 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E0108 23:25:44.241170  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/functional-688728/client.crt: no such file or directory
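(The cert_rotation error above appears to be leftover noise from client-go's certificate-reload watcher: it still points at the client certificate of the functional-688728 profile, which an earlier test in this run deleted. It does not affect this test.)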
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-056673 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (49.614981626s)
--- PASS: TestPause/serial/Start (49.62s)

TestPause/serial/SecondStartNoReconfiguration (38.26s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-056673 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-056673 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (38.233481196s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (38.26s)

TestPause/serial/Pause (0.78s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-056673 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.78s)

TestPause/serial/VerifyStatus (0.35s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-056673 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-056673 --output=json --layout=cluster: exit status 2 (350.236562ms)

-- stdout --
	{"Name":"pause-056673","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-056673","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
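The StatusCode values in this JSON mirror HTTP-style codes (200 OK, 405 Stopped, 418 Paused), and exit status 2 here reflects the paused components rather than a CLI failure (later status checks in this report annotate the same pattern as "may be ok"). Assuming jq is available on the host, the per-component states could be pulled out with a sketch like:
	out/minikube-linux-amd64 status -p pause-056673 --output=json --layout=cluster | jq -r '.Nodes[].Components | to_entries[] | "\(.key)=\(.value.StatusName)"'
	# apiserver=Paused
	# kubelet=Stopped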
--- PASS: TestPause/serial/VerifyStatus (0.35s)

TestPause/serial/Unpause (0.76s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-056673 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.76s)

TestPause/serial/PauseAgain (0.88s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-056673 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.88s)

TestPause/serial/DeletePaused (2.91s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-056673 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-056673 --alsologtostderr -v=5: (2.913956115s)
--- PASS: TestPause/serial/DeletePaused (2.91s)

TestNetworkPlugins/group/false (6.53s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-981799 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-981799 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (188.346817ms)

-- stdout --
	* [false-981799] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17830-321683/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-321683/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
-- /stdout --
** stderr ** 
	I0108 23:26:41.000770  516091 out.go:296] Setting OutFile to fd 1 ...
	I0108 23:26:41.000973  516091 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 23:26:41.000981  516091 out.go:309] Setting ErrFile to fd 2...
	I0108 23:26:41.000986  516091 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 23:26:41.001201  516091 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-321683/.minikube/bin
	I0108 23:26:41.001784  516091 out.go:303] Setting JSON to false
	I0108 23:26:41.003481  516091 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":14933,"bootTime":1704741468,"procs":779,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 23:26:41.003557  516091 start.go:138] virtualization: kvm guest
	I0108 23:26:41.006465  516091 out.go:177] * [false-981799] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 23:26:41.008451  516091 out.go:177]   - MINIKUBE_LOCATION=17830
	I0108 23:26:41.008490  516091 notify.go:220] Checking for updates...
	I0108 23:26:41.009845  516091 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 23:26:41.011210  516091 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17830-321683/kubeconfig
	I0108 23:26:41.012984  516091 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-321683/.minikube
	I0108 23:26:41.014424  516091 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 23:26:41.016284  516091 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 23:26:41.018198  516091 config.go:182] Loaded profile config "cert-expiration-804190": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 23:26:41.018328  516091 config.go:182] Loaded profile config "kubernetes-upgrade-146344": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0108 23:26:41.018470  516091 config.go:182] Loaded profile config "pause-056673": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 23:26:41.018593  516091 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 23:26:41.042240  516091 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 23:26:41.042397  516091 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 23:26:41.104852  516091 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:true NGoroutines:51 SystemTime:2024-01-08 23:26:41.094670912 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 23:26:41.104951  516091 docker.go:295] overlay module found
	I0108 23:26:41.107359  516091 out.go:177] * Using the docker driver based on user configuration
	I0108 23:26:41.109092  516091 start.go:298] selected driver: docker
	I0108 23:26:41.109110  516091 start.go:902] validating driver "docker" against <nil>
	I0108 23:26:41.109122  516091 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 23:26:41.111254  516091 out.go:177] 
	W0108 23:26:41.112763  516091 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0108 23:26:41.114210  516091 out.go:177] 
** /stderr **
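This "failure" is the expected outcome: CRI-O has no built-in pod networking, so minikube rejects --cni=false together with --container-runtime=crio up front and exits with its usage-error code (MK_USAGE, exit status 14) before creating anything. A variant of the same command that would pass validation (a sketch; any supported --cni value such as bridge should do) is:
	out/minikube-linux-amd64 start -p false-981799 --memory=2048 --cni=bridge --driver=docker --container-runtime=crio
Because no cluster or kubeconfig context was ever created, every probe in the debugLogs dump below fails with "context was not found" or "Profile ... not found"; the only kubeconfig entry left on the host belongs to the concurrently running kubernetes-upgrade-146344 profile.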
net_test.go:88: 
----------------------- debugLogs start: false-981799 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-981799

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-981799

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-981799

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-981799

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-981799

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-981799

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-981799

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-981799

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-981799

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-981799

>>> host: /etc/nsswitch.conf:
* Profile "false-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-981799"

>>> host: /etc/hosts:
* Profile "false-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-981799"

>>> host: /etc/resolv.conf:
* Profile "false-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-981799"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-981799

>>> host: crictl pods:
* Profile "false-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-981799"

>>> host: crictl containers:
* Profile "false-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-981799"

>>> k8s: describe netcat deployment:
error: context "false-981799" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-981799" does not exist

>>> k8s: netcat logs:
error: context "false-981799" does not exist

>>> k8s: describe coredns deployment:
error: context "false-981799" does not exist

>>> k8s: describe coredns pods:
error: context "false-981799" does not exist

>>> k8s: coredns logs:
error: context "false-981799" does not exist

>>> k8s: describe api server pod(s):
error: context "false-981799" does not exist

>>> k8s: api server logs:
error: context "false-981799" does not exist

>>> host: /etc/cni:
* Profile "false-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-981799"

>>> host: ip a s:
* Profile "false-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-981799"

>>> host: ip r s:
* Profile "false-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-981799"

>>> host: iptables-save:
* Profile "false-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-981799"

>>> host: iptables table nat:
* Profile "false-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-981799"

>>> k8s: describe kube-proxy daemon set:
error: context "false-981799" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-981799" does not exist

>>> k8s: kube-proxy logs:
error: context "false-981799" does not exist

>>> host: kubelet daemon status:
* Profile "false-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-981799"

>>> host: kubelet daemon config:
* Profile "false-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-981799"

>>> k8s: kubelet logs:
* Profile "false-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-981799"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-981799"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-981799"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17830-321683/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Jan 2024 23:24:21 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-146344
contexts:
- context:
    cluster: kubernetes-upgrade-146344
    user: kubernetes-upgrade-146344
  name: kubernetes-upgrade-146344
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-146344
  user:
    client-certificate: /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/kubernetes-upgrade-146344/client.crt
    client-key: /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/kubernetes-upgrade-146344/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-981799

>>> host: docker daemon status:
* Profile "false-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-981799"

>>> host: docker daemon config:
* Profile "false-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-981799"

>>> host: /etc/docker/daemon.json:
* Profile "false-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-981799"

>>> host: docker system info:
* Profile "false-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-981799"

>>> host: cri-docker daemon status:
* Profile "false-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-981799"

>>> host: cri-docker daemon config:
* Profile "false-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-981799"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-981799"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-981799"

>>> host: cri-dockerd version:
* Profile "false-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-981799"

>>> host: containerd daemon status:
* Profile "false-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-981799"

>>> host: containerd daemon config:
* Profile "false-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-981799"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-981799"

>>> host: /etc/containerd/config.toml:
* Profile "false-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-981799"

>>> host: containerd config dump:
* Profile "false-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-981799"

>>> host: crio daemon status:
* Profile "false-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-981799"

>>> host: crio daemon config:
* Profile "false-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-981799"

>>> host: /etc/crio:
* Profile "false-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-981799"

>>> host: crio config:
* Profile "false-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-981799"

----------------------- debugLogs end: false-981799 [took: 6.10343s] --------------------------------
helpers_test.go:175: Cleaning up "false-981799" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-981799
--- PASS: TestNetworkPlugins/group/false (6.53s)

TestPause/serial/VerifyDeletedResources (0.53s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-056673
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-056673: exit status 1 (21.492075ms)

-- stdout --
	[]
-- /stdout --
** stderr **
	Error response from daemon: get pause-056673: no such volume
** /stderr **
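Here the non-zero exit is the assertion: after "minikube delete", no Docker volume named pause-056673 may remain, so "docker volume inspect" must fail. An equivalent check that avoids the error path (a sketch) would be:
	docker volume ls --filter name=pause-056673 --format '{{.Name}}'
	# empty output confirms the volume is gone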
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.53s)

TestStartStop/group/old-k8s-version/serial/FirstStart (131.48s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-259817 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-259817 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (2m11.474487109s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (131.48s)

TestStartStop/group/no-preload/serial/FirstStart (70.39s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-978791 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-978791 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (1m10.387913658s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (70.39s)

TestStartStop/group/embed-certs/serial/FirstStart (72.60s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-312206 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E0108 23:27:47.479221  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-312206 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (1m12.599945239s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (72.60s)

TestStartStop/group/no-preload/serial/DeployApp (8.26s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-978791 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8654ece8-841d-43ff-aeab-617e6b0008a0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8654ece8-841d-43ff-aeab-617e6b0008a0] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.003574102s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-978791 exec busybox -- /bin/sh -c "ulimit -n"
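For context, the deploy step only needs a pod carrying the integration-test=busybox label that the harness can wait on and exec into ("ulimit -n" simply proves exec works). A minimal manifest in that spirit (a sketch, not the verbatim testdata/busybox.yaml) would be:
	apiVersion: v1
	kind: Pod
	metadata:
	  name: busybox
	  labels:
	    integration-test: busybox
	spec:
	  containers:
	  - name: busybox
	    image: busybox:1.28
	    command: ["sleep", "3600"]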
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.26s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.92s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-978791 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-978791 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.92s)

TestStartStop/group/no-preload/serial/Stop (11.97s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-978791 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-978791 --alsologtostderr -v=3: (11.972618984s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.97s)

TestStartStop/group/embed-certs/serial/DeployApp (8.28s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-312206 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [127a2aef-93f1-43d7-86a6-39e0cb0d585f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [127a2aef-93f1-43d7-86a6-39e0cb0d585f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003240251s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-312206 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.28s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.95s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-312206 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-312206 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.95s)

TestStartStop/group/embed-certs/serial/Stop (12.01s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-312206 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-312206 --alsologtostderr -v=3: (12.010837873s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.01s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-978791 -n no-preload-978791
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-978791 -n no-preload-978791: exit status 7 (109.679606ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-978791 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
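Exit status 7 from "minikube status" encodes the stopped host here rather than a command failure, which is why the harness notes it "may be ok". Enabling the dashboard addon still succeeds on the stopped profile because the addon setting appears to be recorded in the profile's config and applied on the next start; the same pattern repeats for the embed-certs, old-k8s-version, and newest-cni groups below.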
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.26s)

TestStartStop/group/no-preload/serial/SecondStart (337.99s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-978791 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-978791 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (5m37.550622938s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-978791 -n no-preload-978791
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (337.99s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-312206 -n embed-certs-312206
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-312206 -n embed-certs-312206: exit status 7 (82.301145ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-312206 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (341.98s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-312206 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-312206 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (5m41.56713902s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-312206 -n embed-certs-312206
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (341.98s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-259817 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [776cac9d-db5d-45bc-85d5-0bb76514c78e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [776cac9d-db5d-45bc-85d5-0bb76514c78e] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004119896s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-259817 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.39s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.86s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-259817 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-259817 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.86s)

TestStartStop/group/old-k8s-version/serial/Stop (12.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-259817 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-259817 --alsologtostderr -v=3: (12.021460776s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.02s)

TestStartStop/group/newest-cni/serial/FirstStart (34.44s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-793038 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-793038 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (34.442828811s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (34.44s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-259817 -n old-k8s-version-259817
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-259817 -n old-k8s-version-259817: exit status 7 (82.606108ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-259817 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/old-k8s-version/serial/SecondStart (427.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-259817 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
E0108 23:29:17.536712  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/addons-608450/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-259817 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (7m6.749743767s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-259817 -n old-k8s-version-259817
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (427.11s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.87s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-793038 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
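As the warning says, this profile was started with --network-plugin=cni but the test never installs a CNI manifest, so no user pods can be scheduled; that is also why DeployApp above and UserAppExistsAfterStop/AddonExistsAfterStop below pass in 0.00s as deliberate no-ops for the newest-cni group.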
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.87s)

TestStartStop/group/newest-cni/serial/Stop (3.65s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-793038 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-793038 --alsologtostderr -v=3: (3.644919314s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.65s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.28s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-793038 -n newest-cni-793038
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-793038 -n newest-cni-793038: exit status 7 (116.016444ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-793038 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.28s)

TestStartStop/group/newest-cni/serial/SecondStart (26.32s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-793038 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-793038 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (26.004231673s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-793038 -n newest-cni-793038
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (26.32s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-793038 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)
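
Note: VerifyKubernetesImages lists every image inside the profile and flags anything minikube itself does not ship; the kindnetd line above is such a flag, not a failure. A rough hand-run equivalent (a sketch assuming jq is installed and that the JSON output exposes a repoTags field, as recent minikube releases do):

	out/minikube-linux-amd64 -p newest-cni-793038 image list --format=json | jq -r '.[].repoTags[]'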

TestStartStop/group/newest-cni/serial/Pause (2.78s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-793038 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-793038 -n newest-cni-793038
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-793038 -n newest-cni-793038: exit status 2 (316.775529ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-793038 -n newest-cni-793038
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-793038 -n newest-cni-793038: exit status 2 (325.552683ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-793038 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-793038 -n newest-cni-793038
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-793038 -n newest-cni-793038
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.78s)
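
Note: the Pause sequence above is pause, verify, unpause: after pausing, the apiserver reports Paused and the kubelet reports Stopped (each probe exiting with the tolerated status 2), and after unpausing both probes succeed again. The same cycle, runnable by hand against this profile:

	out/minikube-linux-amd64 pause -p newest-cni-793038 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-793038 -n newest-cni-793038   # "Paused", exit 2
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-793038 -n newest-cni-793038     # "Stopped", exit 2
	out/minikube-linux-amd64 unpause -p newest-cni-793038 --alsologtostderr -v=1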

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (37.79s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-778350 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E0108 23:30:44.242070  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/functional-688728/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-778350 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (37.7926364s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (37.79s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-778350 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6b4b2e23-2e78-4dc5-96c4-71b3022b4fc8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6b4b2e23-2e78-4dc5-96c4-71b3022b4fc8] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.00419615s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-778350 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.27s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.93s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-778350 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-778350 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.93s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.97s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-778350 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-778350 --alsologtostderr -v=3: (11.970951679s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.97s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-778350 -n default-k8s-diff-port-778350
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-778350 -n default-k8s-diff-port-778350: exit status 7 (80.936994ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-778350 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (345.33s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-778350 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E0108 23:32:20.584971  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/addons-608450/client.crt: no such file or directory
E0108 23:32:47.479781  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-778350 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (5m44.999761592s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-778350 -n default-k8s-diff-port-778350
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (345.33s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-5vnk5" [fe568075-c27a-459b-91ed-729069feda51] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-5vnk5" [fe568075-c27a-459b-91ed-729069feda51] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.003984322s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-5vnk5" [fe568075-c27a-459b-91ed-729069feda51] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004361483s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-978791 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (11.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-tsfs4" [7bbd2845-e248-4422-8f03-d1036da5e690] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-tsfs4" [7bbd2845-e248-4422-8f03-d1036da5e690] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.004001469s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (11.01s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-978791 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/no-preload/serial/Pause (3.09s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-978791 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-978791 -n no-preload-978791
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-978791 -n no-preload-978791: exit status 2 (357.676715ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-978791 -n no-preload-978791
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-978791 -n no-preload-978791: exit status 2 (344.882752ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-978791 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-978791 -n no-preload-978791
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-978791 -n no-preload-978791
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.09s)

TestNetworkPlugins/group/auto/Start (70.11s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-981799 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E0108 23:34:17.536651  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/addons-608450/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-981799 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m10.11456525s)
--- PASS: TestNetworkPlugins/group/auto/Start (70.11s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-tsfs4" [7bbd2845-e248-4422-8f03-d1036da5e690] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004799687s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-312206 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-312206 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/embed-certs/serial/Pause (2.89s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-312206 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-312206 -n embed-certs-312206
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-312206 -n embed-certs-312206: exit status 2 (320.370918ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-312206 -n embed-certs-312206
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-312206 -n embed-certs-312206: exit status 2 (337.167982ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-312206 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-312206 -n embed-certs-312206
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-312206 -n embed-certs-312206
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.89s)

TestNetworkPlugins/group/kindnet/Start (69.95s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-981799 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-981799 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m9.951397347s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (69.95s)

TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-981799 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

TestNetworkPlugins/group/auto/NetCatPod (9.21s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-981799 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-s6xc7" [655f3b30-2ba6-495d-bc5d-22963949f316] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-s6xc7" [655f3b30-2ba6-495d-bc5d-22963949f316] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004223085s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.21s)

TestNetworkPlugins/group/auto/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-981799 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.15s)

TestNetworkPlugins/group/auto/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-981799 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

TestNetworkPlugins/group/auto/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-981799 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
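
Note: the three short checks above cover the basics of pod networking for each plugin: DNS resolves the in-cluster name kubernetes.default, Localhost confirms the pod can reach its own port, and HairPin confirms the pod can reach itself back through its own service (the netcat service fronting the deployment), exercising hairpin NAT. The underlying commands, runnable by hand:

	kubectl --context auto-981799 exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context auto-981799 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	kubectl --context auto-981799 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"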

TestNetworkPlugins/group/kindnet/ControllerPod (6.03s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-p65jf" [0bfb7bc3-2664-4093-b911-a6ac71612a30] Running
E0108 23:35:44.241606  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/functional-688728/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.025316074s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.03s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-981799 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.27s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-981799 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-n8j66" [81d2acb1-9696-4982-8d90-bfd90ad83a6f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-n8j66" [81d2acb1-9696-4982-8d90-bfd90ad83a6f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.074612919s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.27s)

TestNetworkPlugins/group/kindnet/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-981799 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

TestNetworkPlugins/group/kindnet/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-981799 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

TestNetworkPlugins/group/kindnet/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-981799 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

TestNetworkPlugins/group/calico/Start (62.26s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-981799 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-981799 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m2.258282295s)
--- PASS: TestNetworkPlugins/group/calico/Start (62.26s)

TestNetworkPlugins/group/custom-flannel/Start (54.56s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-981799 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-981799 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (54.556253871s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (54.56s)
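
Note: unlike the named plugins elsewhere in this group, this run hands --cni a manifest path: minikube accepts either a built-in plugin name (auto, bridge, calico, flannel, kindnet, ...) or a path to a custom CNI manifest to apply. The distinguishing flag, trimmed out of the full invocation above:

	out/minikube-linux-amd64 start -p custom-flannel-981799 --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=crio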

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-6m59v" [bc11d2a9-ca9b-42dc-9899-9f0937052192] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003682199s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-6m59v" [bc11d2a9-ca9b-42dc-9899-9f0937052192] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004036561s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-259817 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.35s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-259817 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.35s)

TestStartStop/group/old-k8s-version/serial/Pause (3.57s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-259817 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-259817 -n old-k8s-version-259817
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-259817 -n old-k8s-version-259817: exit status 2 (429.715672ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-259817 -n old-k8s-version-259817
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-259817 -n old-k8s-version-259817: exit status 2 (404.796593ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-259817 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-259817 -n old-k8s-version-259817
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-259817 -n old-k8s-version-259817
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.57s)

TestNetworkPlugins/group/enable-default-cni/Start (82.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-981799 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-981799 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m22.175923404s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (82.18s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-8bz8p" [f3a913a8-1395-428e-9418-21f89419a271] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005706345s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-mwpvr" [cf25e2ed-9cb6-4e42-bcbf-a05d9cef5a09] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004539322s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-981799 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

TestNetworkPlugins/group/calico/NetCatPod (10.2s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-981799 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-cgtrl" [42b3f7b4-4636-46b8-9c53-4f4deb514b7e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-cgtrl" [42b3f7b4-4636-46b8-9c53-4f4deb514b7e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.005511s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.20s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-mwpvr" [cf25e2ed-9cb6-4e42-bcbf-a05d9cef5a09] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004230226s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-778350 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-981799 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-981799 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-v2tch" [3f4caf5a-7ac4-4c17-a15c-7d703c938f03] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-v2tch" [3f4caf5a-7ac4-4c17-a15c-7d703c938f03] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.003439592s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.19s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-778350 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-778350 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-778350 -n default-k8s-diff-port-778350
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-778350 -n default-k8s-diff-port-778350: exit status 2 (337.982352ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-778350 -n default-k8s-diff-port-778350
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-778350 -n default-k8s-diff-port-778350: exit status 2 (332.999653ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-778350 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-778350 -n default-k8s-diff-port-778350
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-778350 -n default-k8s-diff-port-778350
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.08s)
E0108 23:37:47.479248  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/ingress-addon-legacy-713577/client.crt: no such file or directory
E0108 23:37:53.807639  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/no-preload-978791/client.crt: no such file or directory
E0108 23:37:53.812937  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/no-preload-978791/client.crt: no such file or directory
E0108 23:37:53.823355  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/no-preload-978791/client.crt: no such file or directory
E0108 23:37:53.843654  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/no-preload-978791/client.crt: no such file or directory
E0108 23:37:53.884046  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/no-preload-978791/client.crt: no such file or directory
E0108 23:37:53.964488  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/no-preload-978791/client.crt: no such file or directory
E0108 23:37:54.124936  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/no-preload-978791/client.crt: no such file or directory
E0108 23:37:54.445512  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/no-preload-978791/client.crt: no such file or directory
E0108 23:37:55.086138  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/no-preload-978791/client.crt: no such file or directory
E0108 23:37:56.367225  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/no-preload-978791/client.crt: no such file or directory
E0108 23:37:58.927406  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/no-preload-978791/client.crt: no such file or directory

TestNetworkPlugins/group/calico/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-981799 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

TestNetworkPlugins/group/calico/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-981799 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

TestNetworkPlugins/group/calico/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-981799 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

TestNetworkPlugins/group/flannel/Start (54.27s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-981799 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-981799 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (54.273398474s)
--- PASS: TestNetworkPlugins/group/flannel/Start (54.27s)

TestNetworkPlugins/group/custom-flannel/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-981799 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.26s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-981799 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-981799 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

TestNetworkPlugins/group/bridge/Start (79.89s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-981799 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-981799 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m19.887511011s)
--- PASS: TestNetworkPlugins/group/bridge/Start (79.89s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-981799 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-981799 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-cz8m7" [b23b0693-174e-43f7-a3aa-b381324e22a2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0108 23:38:04.048146  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/no-preload-978791/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-cz8m7" [b23b0693-174e-43f7-a3aa-b381324e22a2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003809282s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.19s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-2k9q2" [f8b7841d-32e7-46f1-9e5d-1de1f17a2e2b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004752167s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
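
Note: ControllerPod checks wait for the CNI's own daemon pods to reach Running, matched by label. Roughly the same wait can be expressed with kubectl directly (a sketch, assuming only the label and namespace shown above):

	kubectl --context flannel-981799 -n kube-flannel wait --for=condition=Ready pod -l app=flannel --timeout=10m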

TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-981799 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-981799 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-981799 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-981799 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

TestNetworkPlugins/group/flannel/NetCatPod (9.19s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-981799 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-drxpp" [65c57bfe-19ed-485b-8393-796e1644250b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-drxpp" [65c57bfe-19ed-485b-8393-796e1644250b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.003874732s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.19s)

TestNetworkPlugins/group/flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-981799 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

TestNetworkPlugins/group/flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-981799 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

TestNetworkPlugins/group/flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-981799 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-981799 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

TestNetworkPlugins/group/bridge/NetCatPod (10.18s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-981799 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-57xg7" [8b8e35fc-5e9e-48ca-b47e-3463301e07f5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0108 23:38:58.682760  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/old-k8s-version-259817/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-57xg7" [8b8e35fc-5e9e-48ca-b47e-3463301e07f5] Running
E0108 23:39:03.803724  328384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/old-k8s-version-259817/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003753271s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.18s)
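(The interleaved E0108 cert_rotation lines appear to be noise from a client-certificate watcher that is still tracking the deleted old-k8s-version-259817 profile; they are unrelated to the bridge test and did not affect its result.)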

TestNetworkPlugins/group/bridge/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-981799 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

TestNetworkPlugins/group/bridge/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-981799 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

TestNetworkPlugins/group/bridge/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-981799 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)


Test skip (27/316)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin; skipping the DNS forwarding test
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin; skipping the DNS forwarding test
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin; skipping the DNS forwarding test
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-404135" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-404135
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

TestNetworkPlugins/group/kubenet (4.51s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-981799 [pass: true] --------------------------------
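(Every probe below reports a missing kubenet-981799 context or profile. That is expected: the test was skipped before any cluster was created, so the debug log can only record the absence of state.)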
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-981799

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-981799

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-981799

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-981799

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-981799

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-981799

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-981799

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-981799

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-981799

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-981799

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-981799"

>>> host: /etc/hosts:
* Profile "kubenet-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-981799"

>>> host: /etc/resolv.conf:
* Profile "kubenet-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-981799"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-981799

>>> host: crictl pods:
* Profile "kubenet-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-981799"

>>> host: crictl containers:
* Profile "kubenet-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-981799"

>>> k8s: describe netcat deployment:
error: context "kubenet-981799" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-981799" does not exist

>>> k8s: netcat logs:
error: context "kubenet-981799" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-981799" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-981799" does not exist

>>> k8s: coredns logs:
error: context "kubenet-981799" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-981799" does not exist

>>> k8s: api server logs:
error: context "kubenet-981799" does not exist

>>> host: /etc/cni:
* Profile "kubenet-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-981799"

>>> host: ip a s:
* Profile "kubenet-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-981799"

>>> host: ip r s:
* Profile "kubenet-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-981799"

>>> host: iptables-save:
* Profile "kubenet-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-981799"

>>> host: iptables table nat:
* Profile "kubenet-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-981799"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-981799" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-981799" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-981799" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-981799"

>>> host: kubelet daemon config:
* Profile "kubenet-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-981799"

>>> k8s: kubelet logs:
* Profile "kubenet-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-981799"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-981799"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-981799"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17830-321683/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Jan 2024 23:26:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: cert-expiration-804190
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17830-321683/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Jan 2024 23:24:21 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-146344
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17830-321683/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Jan 2024 23:26:21 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.67.2:8443
  name: pause-056673
contexts:
- context:
    cluster: cert-expiration-804190
    extensions:
    - extension:
        last-update: Mon, 08 Jan 2024 23:26:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: cert-expiration-804190
  name: cert-expiration-804190
- context:
    cluster: kubernetes-upgrade-146344
    user: kubernetes-upgrade-146344
  name: kubernetes-upgrade-146344
- context:
    cluster: pause-056673
    extensions:
    - extension:
        last-update: Mon, 08 Jan 2024 23:26:21 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: pause-056673
  name: pause-056673
current-context: cert-expiration-804190
kind: Config
preferences: {}
users:
- name: cert-expiration-804190
  user:
    client-certificate: /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/cert-expiration-804190/client.crt
    client-key: /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/cert-expiration-804190/client.key
- name: kubernetes-upgrade-146344
  user:
    client-certificate: /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/kubernetes-upgrade-146344/client.crt
    client-key: /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/kubernetes-upgrade-146344/client.key
- name: pause-056673
  user:
    client-certificate: /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/pause-056673/client.crt
    client-key: /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/pause-056673/client.key
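The kubeconfig dump above lists three surviving profiles. A generic usage sketch for inspecting or switching among them (assuming KUBECONFIG points at this file; not part of the test run):

kubectl config get-contexts                # list the contexts shown above
kubectl config use-context pause-056673    # make one of them current
kubectl config view --minify               # print only the active context's entries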

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-981799

>>> host: docker daemon status:
* Profile "kubenet-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-981799"

>>> host: docker daemon config:
* Profile "kubenet-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-981799"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-981799"

>>> host: docker system info:
* Profile "kubenet-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-981799"

>>> host: cri-docker daemon status:
* Profile "kubenet-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-981799"

>>> host: cri-docker daemon config:
* Profile "kubenet-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-981799"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-981799"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-981799"

>>> host: cri-dockerd version:
* Profile "kubenet-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-981799"

>>> host: containerd daemon status:
* Profile "kubenet-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-981799"

>>> host: containerd daemon config:
* Profile "kubenet-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-981799"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-981799"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-981799"

>>> host: containerd config dump:
* Profile "kubenet-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-981799"

>>> host: crio daemon status:
* Profile "kubenet-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-981799"

>>> host: crio daemon config:
* Profile "kubenet-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-981799"

>>> host: /etc/crio:
* Profile "kubenet-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-981799"

>>> host: crio config:
* Profile "kubenet-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-981799"

----------------------- debugLogs end: kubenet-981799 [took: 4.039893673s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-981799" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-981799
--- SKIP: TestNetworkPlugins/group/kubenet (4.51s)

TestNetworkPlugins/group/cilium (4.2s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-981799 [pass: true] --------------------------------
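(As with the kubenet block above, every probe below fails with a missing-context or missing-profile error because no cilium-981799 cluster was ever created.)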
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-981799

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-981799

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-981799

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-981799

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-981799

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-981799

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-981799

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-981799

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-981799

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-981799

>>> host: /etc/nsswitch.conf:
* Profile "cilium-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-981799"

>>> host: /etc/hosts:
* Profile "cilium-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-981799"

>>> host: /etc/resolv.conf:
* Profile "cilium-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-981799"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-981799

>>> host: crictl pods:
* Profile "cilium-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-981799"

>>> host: crictl containers:
* Profile "cilium-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-981799"

>>> k8s: describe netcat deployment:
error: context "cilium-981799" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-981799" does not exist

>>> k8s: netcat logs:
error: context "cilium-981799" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-981799" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-981799" does not exist

>>> k8s: coredns logs:
error: context "cilium-981799" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-981799" does not exist

>>> k8s: api server logs:
error: context "cilium-981799" does not exist

>>> host: /etc/cni:
* Profile "cilium-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-981799"

>>> host: ip a s:
* Profile "cilium-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-981799"

>>> host: ip r s:
* Profile "cilium-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-981799"

>>> host: iptables-save:
* Profile "cilium-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-981799"

>>> host: iptables table nat:
* Profile "cilium-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-981799"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-981799

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-981799

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-981799" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-981799" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-981799

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-981799

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-981799" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-981799" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-981799" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-981799" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-981799" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-981799"

>>> host: kubelet daemon config:
* Profile "cilium-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-981799"

>>> k8s: kubelet logs:
* Profile "cilium-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-981799"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-981799"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-981799"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17830-321683/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Jan 2024 23:24:21 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-146344
contexts:
- context:
    cluster: kubernetes-upgrade-146344
    user: kubernetes-upgrade-146344
  name: kubernetes-upgrade-146344
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-146344
  user:
    client-certificate: /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/kubernetes-upgrade-146344/client.crt
    client-key: /home/jenkins/minikube-integration/17830-321683/.minikube/profiles/kubernetes-upgrade-146344/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-981799

>>> host: docker daemon status:
* Profile "cilium-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-981799"

>>> host: docker daemon config:
* Profile "cilium-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-981799"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-981799"

>>> host: docker system info:
* Profile "cilium-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-981799"

>>> host: cri-docker daemon status:
* Profile "cilium-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-981799"

>>> host: cri-docker daemon config:
* Profile "cilium-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-981799"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-981799"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-981799"

>>> host: cri-dockerd version:
* Profile "cilium-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-981799"

>>> host: containerd daemon status:
* Profile "cilium-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-981799"

>>> host: containerd daemon config:
* Profile "cilium-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-981799"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-981799"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-981799"

>>> host: containerd config dump:
* Profile "cilium-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-981799"

>>> host: crio daemon status:
* Profile "cilium-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-981799"

>>> host: crio daemon config:
* Profile "cilium-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-981799"

>>> host: /etc/crio:
* Profile "cilium-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-981799"

>>> host: crio config:
* Profile "cilium-981799" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-981799"

----------------------- debugLogs end: cilium-981799 [took: 4.027906664s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-981799" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-981799
--- SKIP: TestNetworkPlugins/group/cilium (4.20s)