Test Report: Docker_Linux_crio 17363

9401f4c578044658a0ecc50e70738aa1fc99eff9:2023-10-05:31314

Failed tests (6/307)

| Order | Failed test                                                 | Duration (s) |
|-------|-------------------------------------------------------------|--------------|
| 28    | TestAddons/parallel/Ingress                                 | 154.98       |
| 137   | TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon | 11.73        |
| 158   | TestIngressAddonLegacy/serial/ValidateIngressAddons         | 185.02       |
| 208   | TestMultiNode/serial/PingHostFrom2Pods                      | 3.66         |
| 229   | TestRunningBinaryUpgrade                                    | 78.98        |
| 237   | TestStoppedBinaryUpgrade/Upgrade                            | 83.86        |
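Each failure below can be re-run in isolation with Go's standard -run filter. The sketch below assumes a local minikube source checkout (the integration tests live under test/integration) and a pre-built out/minikube-linux-amd64 binary; the CI job also passes driver and runtime flags (--driver=docker --container-runtime=crio, visible in the Audit log further down) and the harness may require flags of its own, so treat the exact invocation as illustrative:

	# Hypothetical local re-run of a single failing test by name.
	cd minikube
	go test ./test/integration -v -timeout 60m \
		-run 'TestAddons/parallel/Ingress'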
TestAddons/parallel/Ingress (154.98s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-029116 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:205: (dbg) Done: kubectl --context addons-029116 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (1.092676992s)
addons_test.go:230: (dbg) Run:  kubectl --context addons-029116 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-029116 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [929b3d34-c42a-498f-b126-4135e7640a07] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [929b3d34-c42a-498f-b126-4135e7640a07] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.011170669s
addons_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p addons-029116 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:260: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-029116 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.846556498s)

** stderr **
	ssh: Process exited with status 28

** /stderr **
addons_test.go:276: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
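Exit status 28 in the stderr block above is curl's CURLE_OPERATION_TIMEDOUT, propagated back through minikube ssh: the request to the ingress endpoint never completed before curl gave up, consistent with the command running for over two minutes before failing. A minimal sketch for probing the endpoint by hand, reusing the exact command from the log; the retry loop and --max-time are illustrative additions, not part of the test:

	# Re-issue the failing probe with a per-attempt timeout and a few retries.
	for i in 1 2 3 4 5; do
	  out/minikube-linux-amd64 -p addons-029116 ssh \
	    "curl -s --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'" && break
	  sleep 10
	done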
addons_test.go:284: (dbg) Run:  kubectl --context addons-029116 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p addons-029116 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p addons-029116 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:304: (dbg) Done: out/minikube-linux-amd64 -p addons-029116 addons disable ingress-dns --alsologtostderr -v=1: (1.636807222s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-amd64 -p addons-029116 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-linux-amd64 -p addons-029116 addons disable ingress --alsologtostderr -v=1: (7.699248506s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-029116
helpers_test.go:235: (dbg) docker inspect addons-029116:

-- stdout --
	[
	    {
	        "Id": "2c3948a49fe680f76f54f83aacf9d1ad9143b8e294a70ab10e1971a39f098e22",
	        "Created": "2023-10-05T20:03:35.316246106Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 342495,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-05T20:03:35.627464129Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:94671ba3754e2c6976414eaf20a0c7861a5d2f9fc631e1161e8ab0ded9062c52",
	        "ResolvConfPath": "/var/lib/docker/containers/2c3948a49fe680f76f54f83aacf9d1ad9143b8e294a70ab10e1971a39f098e22/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2c3948a49fe680f76f54f83aacf9d1ad9143b8e294a70ab10e1971a39f098e22/hostname",
	        "HostsPath": "/var/lib/docker/containers/2c3948a49fe680f76f54f83aacf9d1ad9143b8e294a70ab10e1971a39f098e22/hosts",
	        "LogPath": "/var/lib/docker/containers/2c3948a49fe680f76f54f83aacf9d1ad9143b8e294a70ab10e1971a39f098e22/2c3948a49fe680f76f54f83aacf9d1ad9143b8e294a70ab10e1971a39f098e22-json.log",
	        "Name": "/addons-029116",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-029116:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-029116",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/5921fff43a8fe0ab0d6090863f6a9f0bd67f5c94dbde75cd23fc757f84e51cdd-init/diff:/var/lib/docker/overlay2/a21dd10b1c0943795b4df336c5f708b264590966562c18c6ecb8b8c4ccc3838e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5921fff43a8fe0ab0d6090863f6a9f0bd67f5c94dbde75cd23fc757f84e51cdd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5921fff43a8fe0ab0d6090863f6a9f0bd67f5c94dbde75cd23fc757f84e51cdd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5921fff43a8fe0ab0d6090863f6a9f0bd67f5c94dbde75cd23fc757f84e51cdd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-029116",
	                "Source": "/var/lib/docker/volumes/addons-029116/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-029116",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-029116",
	                "name.minikube.sigs.k8s.io": "addons-029116",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "79a0622625c8017c13b58ca4e916186c3e81160bc9a29e80a0eee724f0356e60",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/79a0622625c8",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-029116": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "2c3948a49fe6",
	                        "addons-029116"
	                    ],
	                    "NetworkID": "1e8795564219f2e37aa38088e63b11139f1407c4d93206fb32848a312c67e457",
	                    "EndpointID": "ba2e0ef0f26e7800a486ea409f55eb75bf805866852035222180314b9cdd83a6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-029116 -n addons-029116
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-029116 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-029116 logs -n 25: (1.301227021s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube               | jenkins | v1.31.2 | 05 Oct 23 20:03 UTC | 05 Oct 23 20:03 UTC |
	| delete  | -p download-only-096441                                                                     | download-only-096441   | jenkins | v1.31.2 | 05 Oct 23 20:03 UTC | 05 Oct 23 20:03 UTC |
	| delete  | -p download-only-096441                                                                     | download-only-096441   | jenkins | v1.31.2 | 05 Oct 23 20:03 UTC | 05 Oct 23 20:03 UTC |
	| start   | --download-only -p                                                                          | download-docker-170726 | jenkins | v1.31.2 | 05 Oct 23 20:03 UTC |                     |
	|         | download-docker-170726                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-170726                                                                   | download-docker-170726 | jenkins | v1.31.2 | 05 Oct 23 20:03 UTC | 05 Oct 23 20:03 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-924260   | jenkins | v1.31.2 | 05 Oct 23 20:03 UTC |                     |
	|         | binary-mirror-924260                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:42053                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-924260                                                                     | binary-mirror-924260   | jenkins | v1.31.2 | 05 Oct 23 20:03 UTC | 05 Oct 23 20:03 UTC |
	| addons  | disable dashboard -p                                                                        | addons-029116          | jenkins | v1.31.2 | 05 Oct 23 20:03 UTC |                     |
	|         | addons-029116                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-029116          | jenkins | v1.31.2 | 05 Oct 23 20:03 UTC |                     |
	|         | addons-029116                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-029116 --wait=true                                                                | addons-029116          | jenkins | v1.31.2 | 05 Oct 23 20:03 UTC | 05 Oct 23 20:05 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-029116          | jenkins | v1.31.2 | 05 Oct 23 20:05 UTC | 05 Oct 23 20:05 UTC |
	|         | addons-029116                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-029116 ssh cat                                                                       | addons-029116          | jenkins | v1.31.2 | 05 Oct 23 20:05 UTC | 05 Oct 23 20:05 UTC |
	|         | /opt/local-path-provisioner/pvc-4b90cbef-9395-48c7-bf53-d29fd7509af3_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-029116 addons disable                                                                | addons-029116          | jenkins | v1.31.2 | 05 Oct 23 20:05 UTC | 05 Oct 23 20:05 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-029116 ssh curl -s                                                                   | addons-029116          | jenkins | v1.31.2 | 05 Oct 23 20:05 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-029116 ip                                                                            | addons-029116          | jenkins | v1.31.2 | 05 Oct 23 20:05 UTC | 05 Oct 23 20:05 UTC |
	| addons  | addons-029116 addons disable                                                                | addons-029116          | jenkins | v1.31.2 | 05 Oct 23 20:05 UTC | 05 Oct 23 20:05 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-029116          | jenkins | v1.31.2 | 05 Oct 23 20:05 UTC | 05 Oct 23 20:05 UTC |
	|         | -p addons-029116                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-029116          | jenkins | v1.31.2 | 05 Oct 23 20:05 UTC | 05 Oct 23 20:05 UTC |
	|         | addons-029116                                                                               |                        |         |         |                     |                     |
	| addons  | addons-029116 addons disable                                                                | addons-029116          | jenkins | v1.31.2 | 05 Oct 23 20:05 UTC | 05 Oct 23 20:05 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-029116 addons                                                                        | addons-029116          | jenkins | v1.31.2 | 05 Oct 23 20:05 UTC | 05 Oct 23 20:05 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-029116 addons                                                                        | addons-029116          | jenkins | v1.31.2 | 05 Oct 23 20:06 UTC | 05 Oct 23 20:07 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-029116 addons                                                                        | addons-029116          | jenkins | v1.31.2 | 05 Oct 23 20:07 UTC | 05 Oct 23 20:07 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-029116 ip                                                                            | addons-029116          | jenkins | v1.31.2 | 05 Oct 23 20:07 UTC | 05 Oct 23 20:07 UTC |
	| addons  | addons-029116 addons disable                                                                | addons-029116          | jenkins | v1.31.2 | 05 Oct 23 20:07 UTC | 05 Oct 23 20:07 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-029116 addons disable                                                                | addons-029116          | jenkins | v1.31.2 | 05 Oct 23 20:07 UTC | 05 Oct 23 20:07 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/05 20:03:10
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1005 20:03:10.923263  341844 out.go:296] Setting OutFile to fd 1 ...
	I1005 20:03:10.923711  341844 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 20:03:10.923728  341844 out.go:309] Setting ErrFile to fd 2...
	I1005 20:03:10.923736  341844 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 20:03:10.924239  341844 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-334135/.minikube/bin
	I1005 20:03:10.925585  341844 out.go:303] Setting JSON to false
	I1005 20:03:10.926526  341844 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6320,"bootTime":1696529871,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1005 20:03:10.926608  341844 start.go:138] virtualization: kvm guest
	I1005 20:03:10.928874  341844 out.go:177] * [addons-029116] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1005 20:03:10.930442  341844 out.go:177]   - MINIKUBE_LOCATION=17363
	I1005 20:03:10.930417  341844 notify.go:220] Checking for updates...
	I1005 20:03:10.932042  341844 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1005 20:03:10.933707  341844 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17363-334135/kubeconfig
	I1005 20:03:10.935256  341844 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-334135/.minikube
	I1005 20:03:10.936716  341844 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1005 20:03:10.938204  341844 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1005 20:03:10.939838  341844 driver.go:378] Setting default libvirt URI to qemu:///system
	I1005 20:03:10.963266  341844 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1005 20:03:10.963393  341844 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 20:03:11.018809  341844 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2023-10-05 20:03:11.009222089 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1005 20:03:11.018970  341844 docker.go:294] overlay module found
	I1005 20:03:11.021028  341844 out.go:177] * Using the docker driver based on user configuration
	I1005 20:03:11.022429  341844 start.go:298] selected driver: docker
	I1005 20:03:11.022445  341844 start.go:902] validating driver "docker" against <nil>
	I1005 20:03:11.022458  341844 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1005 20:03:11.023512  341844 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 20:03:11.078082  341844 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2023-10-05 20:03:11.068905948 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1005 20:03:11.078293  341844 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1005 20:03:11.078513  341844 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1005 20:03:11.080395  341844 out.go:177] * Using Docker driver with root privileges
	I1005 20:03:11.081738  341844 cni.go:84] Creating CNI manager for ""
	I1005 20:03:11.081762  341844 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1005 20:03:11.081777  341844 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I1005 20:03:11.081793  341844 start_flags.go:321] config:
	{Name:addons-029116 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-029116 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1005 20:03:11.083533  341844 out.go:177] * Starting control plane node addons-029116 in cluster addons-029116
	I1005 20:03:11.084826  341844 cache.go:122] Beginning downloading kic base image for docker with crio
	I1005 20:03:11.086267  341844 out.go:177] * Pulling base image ...
	I1005 20:03:11.087667  341844 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1005 20:03:11.087719  341844 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17363-334135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4
	I1005 20:03:11.087720  341844 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon
	I1005 20:03:11.087742  341844 cache.go:57] Caching tarball of preloaded images
	I1005 20:03:11.087844  341844 preload.go:174] Found /home/jenkins/minikube-integration/17363-334135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1005 20:03:11.087857  341844 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1005 20:03:11.088227  341844 profile.go:148] Saving config to /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/addons-029116/config.json ...
	I1005 20:03:11.088254  341844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/addons-029116/config.json: {Name:mk87917185645386657f9425812bafbb78732a38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:03:11.105554  341844 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae to local cache
	I1005 20:03:11.105706  341844 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local cache directory
	I1005 20:03:11.105731  341844 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local cache directory, skipping pull
	I1005 20:03:11.105741  341844 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae exists in cache, skipping pull
	I1005 20:03:11.105751  341844 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae as a tarball
	I1005 20:03:11.105759  341844 cache.go:163] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae from local cache
	I1005 20:03:22.383877  341844 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae from cached tarball
	I1005 20:03:22.383926  341844 cache.go:195] Successfully downloaded all kic artifacts
	I1005 20:03:22.384000  341844 start.go:365] acquiring machines lock for addons-029116: {Name:mkea0dff84a52b0b397afdb723c797d099c3aca1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 20:03:22.384121  341844 start.go:369] acquired machines lock for "addons-029116" in 98.951µs
	I1005 20:03:22.384149  341844 start.go:93] Provisioning new machine with config: &{Name:addons-029116 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-029116 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1005 20:03:22.384254  341844 start.go:125] createHost starting for "" (driver="docker")
	I1005 20:03:22.386505  341844 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1005 20:03:22.386802  341844 start.go:159] libmachine.API.Create for "addons-029116" (driver="docker")
	I1005 20:03:22.386868  341844 client.go:168] LocalClient.Create starting
	I1005 20:03:22.386998  341844 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17363-334135/.minikube/certs/ca.pem
	I1005 20:03:22.501616  341844 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17363-334135/.minikube/certs/cert.pem
	I1005 20:03:22.646142  341844 cli_runner.go:164] Run: docker network inspect addons-029116 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1005 20:03:22.663975  341844 cli_runner.go:211] docker network inspect addons-029116 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1005 20:03:22.664100  341844 network_create.go:281] running [docker network inspect addons-029116] to gather additional debugging logs...
	I1005 20:03:22.664131  341844 cli_runner.go:164] Run: docker network inspect addons-029116
	W1005 20:03:22.681887  341844 cli_runner.go:211] docker network inspect addons-029116 returned with exit code 1
	I1005 20:03:22.681948  341844 network_create.go:284] error running [docker network inspect addons-029116]: docker network inspect addons-029116: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-029116 not found
	I1005 20:03:22.681969  341844 network_create.go:286] output of [docker network inspect addons-029116]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-029116 not found
	
	** /stderr **
	I1005 20:03:22.682164  341844 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1005 20:03:22.699808  341844 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001346900}
	I1005 20:03:22.699848  341844 network_create.go:124] attempt to create docker network addons-029116 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1005 20:03:22.699904  341844 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-029116 addons-029116
	I1005 20:03:22.759441  341844 network_create.go:108] docker network addons-029116 192.168.49.0/24 created
	I1005 20:03:22.759478  341844 kic.go:117] calculated static IP "192.168.49.2" for the "addons-029116" container
	I1005 20:03:22.759571  341844 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1005 20:03:22.779005  341844 cli_runner.go:164] Run: docker volume create addons-029116 --label name.minikube.sigs.k8s.io=addons-029116 --label created_by.minikube.sigs.k8s.io=true
	I1005 20:03:22.798307  341844 oci.go:103] Successfully created a docker volume addons-029116
	I1005 20:03:22.798442  341844 cli_runner.go:164] Run: docker run --rm --name addons-029116-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-029116 --entrypoint /usr/bin/test -v addons-029116:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -d /var/lib
	I1005 20:03:29.972611  341844 cli_runner.go:217] Completed: docker run --rm --name addons-029116-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-029116 --entrypoint /usr/bin/test -v addons-029116:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -d /var/lib: (7.174106179s)
	I1005 20:03:29.972653  341844 oci.go:107] Successfully prepared a docker volume addons-029116
	I1005 20:03:29.972676  341844 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1005 20:03:29.972699  341844 kic.go:190] Starting extracting preloaded images to volume ...
	I1005 20:03:29.972766  341844 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17363-334135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-029116:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir
	I1005 20:03:35.242126  341844 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17363-334135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-029116:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir: (5.269277854s)
	I1005 20:03:35.242164  341844 kic.go:199] duration metric: took 5.269461 seconds to extract preloaded images to volume
	W1005 20:03:35.242346  341844 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1005 20:03:35.242522  341844 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1005 20:03:35.299687  341844 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-029116 --name addons-029116 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-029116 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-029116 --network addons-029116 --ip 192.168.49.2 --volume addons-029116:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae
	I1005 20:03:35.635975  341844 cli_runner.go:164] Run: docker container inspect addons-029116 --format={{.State.Running}}
	I1005 20:03:35.654934  341844 cli_runner.go:164] Run: docker container inspect addons-029116 --format={{.State.Status}}
	I1005 20:03:35.675101  341844 cli_runner.go:164] Run: docker exec addons-029116 stat /var/lib/dpkg/alternatives/iptables
	I1005 20:03:35.717591  341844 oci.go:144] the created container "addons-029116" has a running status.
	I1005 20:03:35.717624  341844 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17363-334135/.minikube/machines/addons-029116/id_rsa...
	I1005 20:03:35.901066  341844 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17363-334135/.minikube/machines/addons-029116/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1005 20:03:35.927271  341844 cli_runner.go:164] Run: docker container inspect addons-029116 --format={{.State.Status}}
	I1005 20:03:35.946060  341844 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1005 20:03:35.946104  341844 kic_runner.go:114] Args: [docker exec --privileged addons-029116 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1005 20:03:36.025756  341844 cli_runner.go:164] Run: docker container inspect addons-029116 --format={{.State.Status}}
	I1005 20:03:36.045573  341844 machine.go:88] provisioning docker machine ...
	I1005 20:03:36.045652  341844 ubuntu.go:169] provisioning hostname "addons-029116"
	I1005 20:03:36.045723  341844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-029116
	I1005 20:03:36.066632  341844 main.go:141] libmachine: Using SSH client type: native
	I1005 20:03:36.067176  341844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 127.0.0.1 33074 <nil> <nil>}
	I1005 20:03:36.067211  341844 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-029116 && echo "addons-029116" | sudo tee /etc/hostname
	I1005 20:03:36.068188  341844 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56290->127.0.0.1:33074: read: connection reset by peer
	I1005 20:03:39.215674  341844 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-029116
	
	I1005 20:03:39.215765  341844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-029116
	I1005 20:03:39.234024  341844 main.go:141] libmachine: Using SSH client type: native
	I1005 20:03:39.234393  341844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 127.0.0.1 33074 <nil> <nil>}
	I1005 20:03:39.234413  341844 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-029116' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-029116/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-029116' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1005 20:03:39.372058  341844 main.go:141] libmachine: SSH cmd err, output: <nil>: 
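
The hostname script above is plain shell that minikube renders on the host and pipes over SSH. A minimal Go sketch of generating it (hostsFixScript is a made-up helper name, not minikube's actual code):

	package main

	import "fmt"

	// hostsFixScript rebuilds the /etc/hosts script shown above: rewrite the
	// 127.0.1.1 line to the new hostname, or append one if none exists.
	func hostsFixScript(hostname string) string {
		return fmt.Sprintf(
			"if ! grep -xq '.*\\s%[1]s' /etc/hosts; then\n"+
				"  if grep -xq '127.0.1.1\\s.*' /etc/hosts; then\n"+
				"    sudo sed -i 's/^127.0.1.1\\s.*/127.0.1.1 %[1]s/g' /etc/hosts;\n"+
				"  else\n"+
				"    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;\n"+
				"  fi\n"+
				"fi\n", hostname)
	}

	func main() {
		fmt.Print(hostsFixScript("addons-029116"))
	}
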
	I1005 20:03:39.372107  341844 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17363-334135/.minikube CaCertPath:/home/jenkins/minikube-integration/17363-334135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17363-334135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17363-334135/.minikube}
	I1005 20:03:39.372159  341844 ubuntu.go:177] setting up certificates
	I1005 20:03:39.372177  341844 provision.go:83] configureAuth start
	I1005 20:03:39.372252  341844 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-029116
	I1005 20:03:39.390674  341844 provision.go:138] copyHostCerts
	I1005 20:03:39.391214  341844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-334135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17363-334135/.minikube/cert.pem (1123 bytes)
	I1005 20:03:39.391445  341844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-334135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17363-334135/.minikube/key.pem (1675 bytes)
	I1005 20:03:39.391539  341844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-334135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17363-334135/.minikube/ca.pem (1078 bytes)
	I1005 20:03:39.391610  341844 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17363-334135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17363-334135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17363-334135/.minikube/certs/ca-key.pem org=jenkins.addons-029116 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-029116]
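
The server cert at provision.go:112 is signed against the minikube CA with the SANs listed. A self-contained crypto/x509 sketch of that signing step (key size, validity period, and the elided error handling are assumptions, not minikube's actual code):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// CA key/cert, standing in for ca.pem / ca-key.pem from the log
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		ca := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(3, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server cert carrying the SANs listed at provision.go:112 above
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srv := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject: pkix.Name{
				CommonName:   "minikube",
				Organization: []string{"jenkins.addons-029116"},
			},
			NotBefore:   time.Now(),
			NotAfter:    time.Now().AddDate(3, 0, 0),
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:    []string{"localhost", "minikube", "addons-029116"},
			IPAddresses: []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
		}
		der, _ := x509.CreateCertificate(rand.Reader, srv, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
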
	I1005 20:03:39.554004  341844 provision.go:172] copyRemoteCerts
	I1005 20:03:39.554102  341844 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1005 20:03:39.554143  341844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-029116
	I1005 20:03:39.572340  341844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/addons-029116/id_rsa Username:docker}
	I1005 20:03:39.668814  341844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-334135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1005 20:03:39.693906  341844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-334135/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1005 20:03:39.718506  341844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-334135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1005 20:03:39.743351  341844 provision.go:86] duration metric: configureAuth took 371.153636ms
	I1005 20:03:39.743386  341844 ubuntu.go:193] setting minikube options for container-runtime
	I1005 20:03:39.743617  341844 config.go:182] Loaded profile config "addons-029116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1005 20:03:39.743750  341844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-029116
	I1005 20:03:39.762027  341844 main.go:141] libmachine: Using SSH client type: native
	I1005 20:03:39.762438  341844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 127.0.0.1 33074 <nil> <nil>}
	I1005 20:03:39.762457  341844 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1005 20:03:40.000520  341844 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1005 20:03:40.000549  341844 machine.go:91] provisioned docker machine in 3.954953127s
	I1005 20:03:40.000559  341844 client.go:171] LocalClient.Create took 17.613678048s
	I1005 20:03:40.000584  341844 start.go:167] duration metric: libmachine.API.Create for "addons-029116" took 17.613783668s
	I1005 20:03:40.000598  341844 start.go:300] post-start starting for "addons-029116" (driver="docker")
	I1005 20:03:40.000615  341844 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1005 20:03:40.000682  341844 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1005 20:03:40.000740  341844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-029116
	I1005 20:03:40.018717  341844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/addons-029116/id_rsa Username:docker}
	I1005 20:03:40.116960  341844 ssh_runner.go:195] Run: cat /etc/os-release
	I1005 20:03:40.120602  341844 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1005 20:03:40.120650  341844 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1005 20:03:40.120665  341844 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1005 20:03:40.120677  341844 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1005 20:03:40.120728  341844 filesync.go:126] Scanning /home/jenkins/minikube-integration/17363-334135/.minikube/addons for local assets ...
	I1005 20:03:40.120813  341844 filesync.go:126] Scanning /home/jenkins/minikube-integration/17363-334135/.minikube/files for local assets ...
	I1005 20:03:40.120853  341844 start.go:303] post-start completed in 120.237276ms
	I1005 20:03:40.121262  341844 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-029116
	I1005 20:03:40.139143  341844 profile.go:148] Saving config to /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/addons-029116/config.json ...
	I1005 20:03:40.139420  341844 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1005 20:03:40.139467  341844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-029116
	I1005 20:03:40.157678  341844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/addons-029116/id_rsa Username:docker}
	I1005 20:03:40.252470  341844 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1005 20:03:40.257377  341844 start.go:128] duration metric: createHost completed in 17.873105182s
	I1005 20:03:40.257414  341844 start.go:83] releasing machines lock for "addons-029116", held for 17.873279281s
	I1005 20:03:40.257515  341844 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-029116
	I1005 20:03:40.275462  341844 ssh_runner.go:195] Run: cat /version.json
	I1005 20:03:40.275517  341844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-029116
	I1005 20:03:40.275529  341844 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1005 20:03:40.275589  341844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-029116
	I1005 20:03:40.294363  341844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/addons-029116/id_rsa Username:docker}
	I1005 20:03:40.294413  341844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/addons-029116/id_rsa Username:docker}
	I1005 20:03:40.480432  341844 ssh_runner.go:195] Run: systemctl --version
	I1005 20:03:40.485047  341844 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1005 20:03:40.625883  341844 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1005 20:03:40.630746  341844 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1005 20:03:40.651786  341844 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1005 20:03:40.651865  341844 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1005 20:03:40.682622  341844 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
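
The find/-exec commands above sideline conflicting CNI configs by renaming them with a .mk_disabled suffix rather than deleting them, which keeps them restorable later. The same idea as local Go (a sketch only; minikube actually runs the shell shown, over SSH, as root):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	// disableCNIConfs renames every file matching the patterns to
	// <name>.mk_disabled, mirroring the find/mv commands in the log.
	func disableCNIConfs(dir string, patterns ...string) ([]string, error) {
		var disabled []string
		for _, p := range patterns {
			matches, err := filepath.Glob(filepath.Join(dir, p))
			if err != nil {
				return nil, err
			}
			for _, m := range matches {
				if strings.HasSuffix(m, ".mk_disabled") {
					continue
				}
				if err := os.Rename(m, m+".mk_disabled"); err != nil {
					return nil, err
				}
				disabled = append(disabled, m)
			}
		}
		return disabled, nil
	}

	func main() {
		got, err := disableCNIConfs("/etc/cni/net.d", "*loopback.conf*", "*bridge*", "*podman*")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("disabled:", got)
	}
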
	I1005 20:03:40.682646  341844 start.go:469] detecting cgroup driver to use...
	I1005 20:03:40.682683  341844 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1005 20:03:40.682741  341844 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1005 20:03:40.699573  341844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1005 20:03:40.711183  341844 docker.go:197] disabling cri-docker service (if available) ...
	I1005 20:03:40.711255  341844 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1005 20:03:40.725383  341844 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1005 20:03:40.739491  341844 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1005 20:03:40.822392  341844 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1005 20:03:40.901643  341844 docker.go:213] disabling docker service ...
	I1005 20:03:40.901705  341844 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1005 20:03:40.921881  341844 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1005 20:03:40.934380  341844 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1005 20:03:41.014325  341844 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1005 20:03:41.102902  341844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1005 20:03:41.115219  341844 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1005 20:03:41.132097  341844 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1005 20:03:41.132186  341844 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1005 20:03:41.142856  341844 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1005 20:03:41.142932  341844 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1005 20:03:41.153626  341844 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1005 20:03:41.164035  341844 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
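
Each sed above swaps out one whole "key = value" line in /etc/crio/crio.conf.d/02-crio.conf: the pause image, the cgroup manager, and conmon's cgroup. For illustration, the same whole-line replacement in Go (setKey is our name; minikube itself shells out to sed as shown):

	package main

	import (
		"fmt"
		"regexp"
	)

	// setKey replaces the entire `key = ...` line, just like the sed
	// expressions above (key must not contain regexp metacharacters).
	func setKey(conf, key, val string) string {
		re := regexp.MustCompile(`(?m)^.*` + key + ` = .*$`)
		return re.ReplaceAllString(conf, key+" = \""+val+"\"")
	}

	func main() {
		conf := "pause_image = \"old\"\ncgroup_manager = \"systemd\"\n"
		conf = setKey(conf, "pause_image", "registry.k8s.io/pause:3.9")
		conf = setKey(conf, "cgroup_manager", "cgroupfs")
		fmt.Print(conf)
	}
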
	I1005 20:03:41.174528  341844 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1005 20:03:41.184321  341844 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1005 20:03:41.193258  341844 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1005 20:03:41.202190  341844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1005 20:03:41.277544  341844 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1005 20:03:41.380778  341844 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1005 20:03:41.380908  341844 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1005 20:03:41.384721  341844 start.go:537] Will wait 60s for crictl version
	I1005 20:03:41.384779  341844 ssh_runner.go:195] Run: which crictl
	I1005 20:03:41.388589  341844 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1005 20:03:41.426684  341844 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1005 20:03:41.426815  341844 ssh_runner.go:195] Run: crio --version
	I1005 20:03:41.465802  341844 ssh_runner.go:195] Run: crio --version
	I1005 20:03:41.504721  341844 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.6 ...
	I1005 20:03:41.506251  341844 cli_runner.go:164] Run: docker network inspect addons-029116 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1005 20:03:41.524458  341844 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1005 20:03:41.528484  341844 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1005 20:03:41.540261  341844 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1005 20:03:41.540337  341844 ssh_runner.go:195] Run: sudo crictl images --output json
	I1005 20:03:41.595850  341844 crio.go:496] all images are preloaded for cri-o runtime.
	I1005 20:03:41.595883  341844 crio.go:415] Images already preloaded, skipping extraction
	I1005 20:03:41.595942  341844 ssh_runner.go:195] Run: sudo crictl images --output json
	I1005 20:03:41.630819  341844 crio.go:496] all images are preloaded for cri-o runtime.
	I1005 20:03:41.630843  341844 cache_images.go:84] Images are preloaded, skipping loading
	I1005 20:03:41.630903  341844 ssh_runner.go:195] Run: crio config
	I1005 20:03:41.675432  341844 cni.go:84] Creating CNI manager for ""
	I1005 20:03:41.675456  341844 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1005 20:03:41.675483  341844 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1005 20:03:41.675512  341844 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-029116 NodeName:addons-029116 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1005 20:03:41.675676  341844 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-029116"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
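
The four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are rendered from a Go text/template before being copied to /var/tmp/minikube/kubeadm.yaml. An abbreviated sketch of that rendering step (the template below is trimmed to a handful of fields and is not minikube's full template):

	package main

	import (
		"os"
		"text/template"
	)

	// A heavily trimmed stand-in for the kubeadm config template; only a few
	// of the fields visible in the real config above are kept.
	const tmpl = "apiVersion: kubeadm.k8s.io/v1beta3\n" +
		"kind: InitConfiguration\n" +
		"localAPIEndpoint:\n" +
		"  advertiseAddress: {{.AdvertiseAddress}}\n" +
		"  bindPort: {{.BindPort}}\n" +
		"nodeRegistration:\n" +
		"  name: \"{{.NodeName}}\"\n" +
		"---\n" +
		"apiVersion: kubeadm.k8s.io/v1beta3\n" +
		"kind: ClusterConfiguration\n" +
		"networking:\n" +
		"  podSubnet: \"{{.PodSubnet}}\"\n" +
		"  serviceSubnet: {{.ServiceSubnet}}\n"

	type params struct {
		AdvertiseAddress string
		BindPort         int
		NodeName         string
		PodSubnet        string
		ServiceSubnet    string
	}

	func main() {
		t := template.Must(template.New("kubeadm").Parse(tmpl))
		_ = t.Execute(os.Stdout, params{
			AdvertiseAddress: "192.168.49.2",
			BindPort:         8443,
			NodeName:         "addons-029116",
			PodSubnet:        "10.244.0.0/16",
			ServiceSubnet:    "10.96.0.0/12",
		})
	}
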
	
	I1005 20:03:41.675816  341844 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-029116 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:addons-029116 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1005 20:03:41.675895  341844 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1005 20:03:41.684971  341844 binaries.go:44] Found k8s binaries, skipping transfer
	I1005 20:03:41.685043  341844 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1005 20:03:41.694335  341844 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I1005 20:03:41.712698  341844 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1005 20:03:41.731662  341844 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I1005 20:03:41.750292  341844 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1005 20:03:41.754205  341844 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1005 20:03:41.765647  341844 certs.go:56] Setting up /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/addons-029116 for IP: 192.168.49.2
	I1005 20:03:41.765690  341844 certs.go:190] acquiring lock for shared ca certs: {Name:mk1be6ef34f8fc4cfa2ec636f9e6906c15e2096a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:03:41.765835  341844 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17363-334135/.minikube/ca.key
	I1005 20:03:41.916012  341844 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17363-334135/.minikube/ca.crt ...
	I1005 20:03:41.916048  341844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-334135/.minikube/ca.crt: {Name:mk5eb0a534b69ae77fa03ac4c62cbfa2c0df23a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:03:41.916262  341844 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17363-334135/.minikube/ca.key ...
	I1005 20:03:41.916273  341844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-334135/.minikube/ca.key: {Name:mke33a1ee8672ca4b446cc56d81b977f865ff8c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:03:41.916341  341844 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17363-334135/.minikube/proxy-client-ca.key
	I1005 20:03:41.988699  341844 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17363-334135/.minikube/proxy-client-ca.crt ...
	I1005 20:03:41.988737  341844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-334135/.minikube/proxy-client-ca.crt: {Name:mk4db132c848fbe296ecf62567e08fba56a1b2dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:03:41.988929  341844 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17363-334135/.minikube/proxy-client-ca.key ...
	I1005 20:03:41.988940  341844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-334135/.minikube/proxy-client-ca.key: {Name:mk7985ddbb245f5c9346d746f90b454948771bb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:03:41.989053  341844 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/addons-029116/client.key
	I1005 20:03:41.989075  341844 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/addons-029116/client.crt with IP's: []
	I1005 20:03:42.068119  341844 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/addons-029116/client.crt ...
	I1005 20:03:42.068162  341844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/addons-029116/client.crt: {Name:mk648999d40b665a8d479bb24d942d3242cf9df0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:03:42.068351  341844 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/addons-029116/client.key ...
	I1005 20:03:42.068362  341844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/addons-029116/client.key: {Name:mkf02683df885458695afdabfcb4d667cf9568a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:03:42.068431  341844 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/addons-029116/apiserver.key.dd3b5fb2
	I1005 20:03:42.068450  341844 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/addons-029116/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1005 20:03:42.156403  341844 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/addons-029116/apiserver.crt.dd3b5fb2 ...
	I1005 20:03:42.156440  341844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/addons-029116/apiserver.crt.dd3b5fb2: {Name:mkb6b7749631ca232d57cd7072bc08dead1b8965 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:03:42.156615  341844 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/addons-029116/apiserver.key.dd3b5fb2 ...
	I1005 20:03:42.156627  341844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/addons-029116/apiserver.key.dd3b5fb2: {Name:mk7fb7704c582c0111fbdca12eb10ff71ce17f05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:03:42.156705  341844 certs.go:337] copying /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/addons-029116/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/addons-029116/apiserver.crt
	I1005 20:03:42.156774  341844 certs.go:341] copying /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/addons-029116/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/addons-029116/apiserver.key
	I1005 20:03:42.156821  341844 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/addons-029116/proxy-client.key
	I1005 20:03:42.156838  341844 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/addons-029116/proxy-client.crt with IP's: []
	I1005 20:03:42.457404  341844 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/addons-029116/proxy-client.crt ...
	I1005 20:03:42.457442  341844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/addons-029116/proxy-client.crt: {Name:mkd23423cf07cfaae5c95f76c7a004337fa0c144 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:03:42.457657  341844 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/addons-029116/proxy-client.key ...
	I1005 20:03:42.457671  341844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/addons-029116/proxy-client.key: {Name:mk8e630e72069d7295225b65f84c46b55a6e4dbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:03:42.457872  341844 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-334135/.minikube/certs/home/jenkins/minikube-integration/17363-334135/.minikube/certs/ca-key.pem (1679 bytes)
	I1005 20:03:42.457914  341844 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-334135/.minikube/certs/home/jenkins/minikube-integration/17363-334135/.minikube/certs/ca.pem (1078 bytes)
	I1005 20:03:42.457938  341844 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-334135/.minikube/certs/home/jenkins/minikube-integration/17363-334135/.minikube/certs/cert.pem (1123 bytes)
	I1005 20:03:42.457969  341844 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-334135/.minikube/certs/home/jenkins/minikube-integration/17363-334135/.minikube/certs/key.pem (1675 bytes)
	I1005 20:03:42.458657  341844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/addons-029116/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1005 20:03:42.484042  341844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/addons-029116/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1005 20:03:42.509701  341844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/addons-029116/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1005 20:03:42.535109  341844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/addons-029116/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1005 20:03:42.560281  341844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-334135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1005 20:03:42.584988  341844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-334135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1005 20:03:42.609821  341844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-334135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1005 20:03:42.634491  341844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-334135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1005 20:03:42.659254  341844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-334135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1005 20:03:42.684290  341844 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1005 20:03:42.702931  341844 ssh_runner.go:195] Run: openssl version
	I1005 20:03:42.708649  341844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1005 20:03:42.719052  341844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1005 20:03:42.723241  341844 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  5 20:03 /usr/share/ca-certificates/minikubeCA.pem
	I1005 20:03:42.723316  341844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1005 20:03:42.730467  341844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
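
The b5213941.0 symlink above exists because OpenSSL resolves trusted CAs in /etc/ssl/certs by subject-hash file names, and 'openssl x509 -hash -noout' prints exactly that hash. The same step driven from Go (linkCertByHash is our name; creating the link needs root):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// linkCertByHash asks openssl for the cert's subject hash and creates the
	// /etc/ssl/certs/<hash>.0 symlink that OpenSSL's CA lookup expects.
	func linkCertByHash(pemPath string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return "", err
		}
		link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
		return link, os.Symlink(pemPath, link)
	}

	func main() {
		link, err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("linked", link)
	}
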
	I1005 20:03:42.740597  341844 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1005 20:03:42.744429  341844 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1005 20:03:42.744497  341844 kubeadm.go:404] StartCluster: {Name:addons-029116 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-029116 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1005 20:03:42.744590  341844 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1005 20:03:42.744640  341844 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1005 20:03:42.782568  341844 cri.go:89] found id: ""
	I1005 20:03:42.782679  341844 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1005 20:03:42.792234  341844 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1005 20:03:42.802273  341844 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1005 20:03:42.802355  341844 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1005 20:03:42.811625  341844 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1005 20:03:42.811703  341844 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1005 20:03:42.860398  341844 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I1005 20:03:42.860507  341844 kubeadm.go:322] [preflight] Running pre-flight checks
	I1005 20:03:42.900256  341844 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1005 20:03:42.900387  341844 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1044-gcp
	I1005 20:03:42.900480  341844 kubeadm.go:322] OS: Linux
	I1005 20:03:42.900539  341844 kubeadm.go:322] CGROUPS_CPU: enabled
	I1005 20:03:42.900606  341844 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1005 20:03:42.900677  341844 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1005 20:03:42.900749  341844 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1005 20:03:42.900819  341844 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1005 20:03:42.900895  341844 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1005 20:03:42.900964  341844 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1005 20:03:42.901037  341844 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1005 20:03:42.901088  341844 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1005 20:03:42.969009  341844 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1005 20:03:42.969163  341844 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1005 20:03:42.969317  341844 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1005 20:03:43.181258  341844 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1005 20:03:43.184491  341844 out.go:204]   - Generating certificates and keys ...
	I1005 20:03:43.184652  341844 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1005 20:03:43.184780  341844 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1005 20:03:43.299156  341844 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1005 20:03:43.384536  341844 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1005 20:03:43.475271  341844 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1005 20:03:43.728548  341844 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1005 20:03:43.791203  341844 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1005 20:03:43.791356  341844 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-029116 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1005 20:03:43.990462  341844 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1005 20:03:43.990598  341844 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-029116 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1005 20:03:44.110055  341844 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1005 20:03:44.330944  341844 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1005 20:03:44.430208  341844 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1005 20:03:44.430333  341844 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1005 20:03:44.615543  341844 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1005 20:03:44.785792  341844 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1005 20:03:44.935189  341844 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1005 20:03:45.002202  341844 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1005 20:03:45.002602  341844 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1005 20:03:45.005207  341844 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1005 20:03:45.007731  341844 out.go:204]   - Booting up control plane ...
	I1005 20:03:45.007940  341844 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1005 20:03:45.008052  341844 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1005 20:03:45.008819  341844 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1005 20:03:45.018254  341844 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1005 20:03:45.019099  341844 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1005 20:03:45.019162  341844 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1005 20:03:45.099948  341844 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1005 20:03:50.102585  341844 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.002759 seconds
	I1005 20:03:50.102707  341844 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1005 20:03:50.116977  341844 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1005 20:03:50.643133  341844 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1005 20:03:50.643388  341844 kubeadm.go:322] [mark-control-plane] Marking the node addons-029116 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1005 20:03:51.153885  341844 kubeadm.go:322] [bootstrap-token] Using token: h2izsy.1aastj95skb9rb0b
	I1005 20:03:51.155698  341844 out.go:204]   - Configuring RBAC rules ...
	I1005 20:03:51.155860  341844 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1005 20:03:51.160917  341844 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1005 20:03:51.169169  341844 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1005 20:03:51.172845  341844 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1005 20:03:51.176664  341844 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1005 20:03:51.181759  341844 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1005 20:03:51.196720  341844 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1005 20:03:51.412297  341844 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1005 20:03:51.642142  341844 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1005 20:03:51.643214  341844 kubeadm.go:322] 
	I1005 20:03:51.643329  341844 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1005 20:03:51.643346  341844 kubeadm.go:322] 
	I1005 20:03:51.643442  341844 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1005 20:03:51.643454  341844 kubeadm.go:322] 
	I1005 20:03:51.643489  341844 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1005 20:03:51.643563  341844 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1005 20:03:51.643636  341844 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1005 20:03:51.643644  341844 kubeadm.go:322] 
	I1005 20:03:51.643716  341844 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1005 20:03:51.643725  341844 kubeadm.go:322] 
	I1005 20:03:51.643784  341844 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1005 20:03:51.643794  341844 kubeadm.go:322] 
	I1005 20:03:51.643870  341844 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1005 20:03:51.643967  341844 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1005 20:03:51.644057  341844 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1005 20:03:51.644067  341844 kubeadm.go:322] 
	I1005 20:03:51.644167  341844 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1005 20:03:51.644266  341844 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1005 20:03:51.644281  341844 kubeadm.go:322] 
	I1005 20:03:51.644387  341844 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token h2izsy.1aastj95skb9rb0b \
	I1005 20:03:51.644519  341844 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:af54c40b34df9aa62a3cf1403ac0941464ca2ce3fa61291d1928dbb7869129bb \
	I1005 20:03:51.644552  341844 kubeadm.go:322] 	--control-plane 
	I1005 20:03:51.644562  341844 kubeadm.go:322] 
	I1005 20:03:51.644671  341844 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1005 20:03:51.644682  341844 kubeadm.go:322] 
	I1005 20:03:51.644783  341844 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token h2izsy.1aastj95skb9rb0b \
	I1005 20:03:51.644917  341844 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:af54c40b34df9aa62a3cf1403ac0941464ca2ce3fa61291d1928dbb7869129bb 
	I1005 20:03:51.724031  341844 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1044-gcp\n", err: exit status 1
	I1005 20:03:51.724187  341844 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1005 20:03:51.724223  341844 cni.go:84] Creating CNI manager for ""
	I1005 20:03:51.724239  341844 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1005 20:03:51.726480  341844 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1005 20:03:51.727941  341844 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1005 20:03:51.732780  341844 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
	I1005 20:03:51.732809  341844 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1005 20:03:51.752675  341844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1005 20:03:52.446658  341844 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1005 20:03:52.446759  341844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=300d55cee86053f5b4c7a654fc8e7b9d3c030d53 minikube.k8s.io/name=addons-029116 minikube.k8s.io/updated_at=2023_10_05T20_03_52_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:03:52.446766  341844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:03:52.455034  341844 ops.go:34] apiserver oom_adj: -16
	I1005 20:03:52.534369  341844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:03:52.727568  341844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:03:53.302915  341844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:03:53.803203  341844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:03:54.302836  341844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:03:54.802610  341844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:03:55.302935  341844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:03:55.802962  341844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:03:56.302627  341844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:03:56.802484  341844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:03:57.303363  341844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:03:57.802606  341844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:03:58.302819  341844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:03:58.803415  341844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:03:59.302392  341844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:03:59.803324  341844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:04:00.303135  341844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:04:00.803299  341844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:04:01.302453  341844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:04:01.803224  341844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:04:02.303209  341844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:04:02.803202  341844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:04:03.302480  341844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:04:03.803279  341844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:04:04.302465  341844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:04:04.472172  341844 kubeadm.go:1081] duration metric: took 12.02548897s to wait for elevateKubeSystemPrivileges.
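
The burst of identical 'kubectl get sa default' runs from 20:03:52 to 20:04:04 is a poll: kubeadm has returned, but the default ServiceAccount is created asynchronously by the controller manager, so minikube retries until the lookup succeeds (12.02s here). A minimal version of that wait loop (the interval and timeout below are assumptions):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForDefaultSA polls `kubectl get sa default` until it succeeds or the
	// deadline passes, mirroring the repeated invocations in the log above.
	func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			cmd := exec.Command(kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
			if cmd.Run() == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("default ServiceAccount not ready after %v", timeout)
	}

	func main() {
		if err := waitForDefaultSA("kubectl", "/var/lib/minikube/kubeconfig", time.Minute); err != nil {
			fmt.Println(err)
		}
	}
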
	I1005 20:04:04.472218  341844 kubeadm.go:406] StartCluster complete in 21.727730191s
	I1005 20:04:04.472239  341844 settings.go:142] acquiring lock: {Name:mk6ed3422387c6b56e20ba6eb900649f1c8038d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:04:04.472351  341844 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17363-334135/kubeconfig
	I1005 20:04:04.472888  341844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-334135/kubeconfig: {Name:mk99d37d95bb8af0e1f4fc14f039efe68f627fd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:04:04.474766  341844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1005 20:04:04.474775  341844 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
	I1005 20:04:04.474851  341844 addons.go:69] Setting volumesnapshots=true in profile "addons-029116"
	I1005 20:04:04.474898  341844 addons.go:69] Setting cloud-spanner=true in profile "addons-029116"
	I1005 20:04:04.474914  341844 addons.go:231] Setting addon volumesnapshots=true in "addons-029116"
	I1005 20:04:04.474924  341844 addons.go:231] Setting addon cloud-spanner=true in "addons-029116"
	I1005 20:04:04.474966  341844 host.go:66] Checking if "addons-029116" exists ...
	I1005 20:04:04.474977  341844 host.go:66] Checking if "addons-029116" exists ...
	I1005 20:04:04.474976  341844 addons.go:69] Setting default-storageclass=true in profile "addons-029116"
	I1005 20:04:04.475002  341844 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-029116"
	I1005 20:04:04.475125  341844 config.go:182] Loaded profile config "addons-029116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1005 20:04:04.475171  341844 addons.go:69] Setting inspektor-gadget=true in profile "addons-029116"
	I1005 20:04:04.475184  341844 addons.go:231] Setting addon inspektor-gadget=true in "addons-029116"
	I1005 20:04:04.475229  341844 host.go:66] Checking if "addons-029116" exists ...
	I1005 20:04:04.475298  341844 addons.go:69] Setting registry=true in profile "addons-029116"
	I1005 20:04:04.475321  341844 addons.go:231] Setting addon registry=true in "addons-029116"
	I1005 20:04:04.475359  341844 cli_runner.go:164] Run: docker container inspect addons-029116 --format={{.State.Status}}
	I1005 20:04:04.475361  341844 host.go:66] Checking if "addons-029116" exists ...
	I1005 20:04:04.475506  341844 cli_runner.go:164] Run: docker container inspect addons-029116 --format={{.State.Status}}
	I1005 20:04:04.475494  341844 addons.go:69] Setting storage-provisioner=true in profile "addons-029116"
	I1005 20:04:04.475515  341844 cli_runner.go:164] Run: docker container inspect addons-029116 --format={{.State.Status}}
	I1005 20:04:04.475523  341844 addons.go:231] Setting addon storage-provisioner=true in "addons-029116"
	I1005 20:04:04.475560  341844 host.go:66] Checking if "addons-029116" exists ...
	I1005 20:04:04.475588  341844 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-029116"
	I1005 20:04:04.475613  341844 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-029116"
	I1005 20:04:04.475635  341844 cli_runner.go:164] Run: docker container inspect addons-029116 --format={{.State.Status}}
	I1005 20:04:04.475780  341844 cli_runner.go:164] Run: docker container inspect addons-029116 --format={{.State.Status}}
	I1005 20:04:04.475861  341844 addons.go:69] Setting ingress=true in profile "addons-029116"
	I1005 20:04:04.475887  341844 addons.go:231] Setting addon ingress=true in "addons-029116"
	I1005 20:04:04.475894  341844 cli_runner.go:164] Run: docker container inspect addons-029116 --format={{.State.Status}}
	I1005 20:04:04.475950  341844 host.go:66] Checking if "addons-029116" exists ...
	I1005 20:04:04.476027  341844 cli_runner.go:164] Run: docker container inspect addons-029116 --format={{.State.Status}}
	I1005 20:04:04.476359  341844 cli_runner.go:164] Run: docker container inspect addons-029116 --format={{.State.Status}}
	I1005 20:04:04.474858  341844 addons.go:69] Setting gcp-auth=true in profile "addons-029116"
	I1005 20:04:04.476512  341844 mustload.go:65] Loading cluster: addons-029116
	I1005 20:04:04.476538  341844 addons.go:69] Setting metrics-server=true in profile "addons-029116"
	I1005 20:04:04.476573  341844 addons.go:231] Setting addon metrics-server=true in "addons-029116"
	I1005 20:04:04.476621  341844 host.go:66] Checking if "addons-029116" exists ...
	I1005 20:04:04.476674  341844 addons.go:69] Setting ingress-dns=true in profile "addons-029116"
	I1005 20:04:04.476695  341844 addons.go:231] Setting addon ingress-dns=true in "addons-029116"
	I1005 20:04:04.476713  341844 config.go:182] Loaded profile config "addons-029116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1005 20:04:04.476746  341844 host.go:66] Checking if "addons-029116" exists ...
	I1005 20:04:04.476955  341844 cli_runner.go:164] Run: docker container inspect addons-029116 --format={{.State.Status}}
	I1005 20:04:04.477072  341844 cli_runner.go:164] Run: docker container inspect addons-029116 --format={{.State.Status}}
	I1005 20:04:04.477242  341844 cli_runner.go:164] Run: docker container inspect addons-029116 --format={{.State.Status}}
	I1005 20:04:04.477330  341844 addons.go:69] Setting helm-tiller=true in profile "addons-029116"
	I1005 20:04:04.477356  341844 addons.go:231] Setting addon helm-tiller=true in "addons-029116"
	I1005 20:04:04.477406  341844 host.go:66] Checking if "addons-029116" exists ...
	I1005 20:04:04.474887  341844 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-029116"
	I1005 20:04:04.481333  341844 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-029116"
	I1005 20:04:04.481448  341844 host.go:66] Checking if "addons-029116" exists ...
	I1005 20:04:04.482012  341844 cli_runner.go:164] Run: docker container inspect addons-029116 --format={{.State.Status}}
	I1005 20:04:04.503734  341844 cli_runner.go:164] Run: docker container inspect addons-029116 --format={{.State.Status}}
	I1005 20:04:04.506105  341844 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I1005 20:04:04.506668  341844 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-029116"
	I1005 20:04:04.509682  341844 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I1005 20:04:04.508412  341844 host.go:66] Checking if "addons-029116" exists ...
	I1005 20:04:04.508433  341844 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1005 20:04:04.508482  341844 host.go:66] Checking if "addons-029116" exists ...
	I1005 20:04:04.511476  341844 addons.go:231] Setting addon default-storageclass=true in "addons-029116"
	I1005 20:04:04.511677  341844 cli_runner.go:164] Run: docker container inspect addons-029116 --format={{.State.Status}}
	I1005 20:04:04.513362  341844 out.go:177]   - Using image docker.io/registry:2.8.1
	I1005 20:04:04.513586  341844 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1005 20:04:04.513462  341844 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.21.0
	I1005 20:04:04.513630  341844 host.go:66] Checking if "addons-029116" exists ...
	I1005 20:04:04.513663  341844 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.1
	I1005 20:04:04.513397  341844 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.10
	I1005 20:04:04.516828  341844 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1005 20:04:04.516852  341844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1005 20:04:04.516921  341844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-029116
	I1005 20:04:04.516920  341844 cli_runner.go:164] Run: docker container inspect addons-029116 --format={{.State.Status}}
	I1005 20:04:04.516210  341844 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1005 20:04:04.516231  341844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
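Each "scp memory --> ..." entry above streams a manifest that minikube holds in memory straight over the node's SSH port (127.0.0.1:33074 in the sshutil lines below) rather than copying a file from disk. A minimal sketch of that idea using golang.org/x/crypto/ssh; the pushMemory helper and the tee command are illustrative, not minikube's actual ssh_runner:

package main

import (
	"bytes"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// pushMemory writes an in-memory buffer to remotePath on the node by
// piping it into sudo tee (illustrative stand-in for "scp memory").
func pushMemory(client *ssh.Client, data []byte, remotePath string) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	return sess.Run(fmt.Sprintf("sudo tee %s > /dev/null", remotePath))
}

func main() {
	// Key path and endpoint taken from the sshutil lines in this log.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/17363-334135/.minikube/machines/addons-029116/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33074", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test node
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()
	if err := pushMemory(client, []byte("apiVersion: v1\nkind: Namespace\n"), "/tmp/example.yaml"); err != nil {
		panic(err)
	}
}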
	I1005 20:04:04.518810  341844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-029116
	I1005 20:04:04.518879  341844 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1005 20:04:04.520048  341844 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1005 20:04:04.520793  341844 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1005 20:04:04.520814  341844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1005 20:04:04.520410  341844 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1005 20:04:04.521948  341844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16083 bytes)
	I1005 20:04:04.522032  341844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-029116
	I1005 20:04:04.522251  341844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1005 20:04:04.522312  341844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-029116
	I1005 20:04:04.522482  341844 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1005 20:04:04.522502  341844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-029116
	I1005 20:04:04.523992  341844 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1005 20:04:04.525313  341844 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1005 20:04:04.527198  341844 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1005 20:04:04.528528  341844 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1005 20:04:04.529892  341844 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1005 20:04:04.531249  341844 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1005 20:04:04.532537  341844 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1005 20:04:04.533861  341844 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1005 20:04:04.533892  341844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1005 20:04:04.533971  341844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-029116
	I1005 20:04:04.532456  341844 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1005 20:04:04.534255  341844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1005 20:04:04.534313  341844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-029116
	I1005 20:04:04.544948  341844 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1005 20:04:04.547295  341844 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1005 20:04:04.547327  341844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1005 20:04:04.547406  341844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-029116
	I1005 20:04:04.556900  341844 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I1005 20:04:04.558232  341844 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I1005 20:04:04.558261  341844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I1005 20:04:04.558343  341844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-029116
	I1005 20:04:04.560964  341844 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1005 20:04:04.562526  341844 out.go:177]   - Using image docker.io/busybox:stable
	I1005 20:04:04.564296  341844 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1005 20:04:04.564326  341844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1005 20:04:04.564400  341844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-029116
	I1005 20:04:04.566234  341844 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1005 20:04:04.567788  341844 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1005 20:04:04.566259  341844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/addons-029116/id_rsa Username:docker}
	I1005 20:04:04.564936  341844 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1005 20:04:04.567825  341844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1005 20:04:04.567825  341844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1005 20:04:04.567892  341844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-029116
	I1005 20:04:04.567892  341844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-029116
	I1005 20:04:04.577804  341844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/addons-029116/id_rsa Username:docker}
	I1005 20:04:04.584705  341844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/addons-029116/id_rsa Username:docker}
	I1005 20:04:04.589253  341844 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-029116" context rescaled to 1 replicas
	I1005 20:04:04.589298  341844 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1005 20:04:04.590544  341844 out.go:177] * Verifying Kubernetes components...
	I1005 20:04:04.592291  341844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/addons-029116/id_rsa Username:docker}
	I1005 20:04:04.592371  341844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1005 20:04:04.606167  341844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/addons-029116/id_rsa Username:docker}
	I1005 20:04:04.608609  341844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/addons-029116/id_rsa Username:docker}
	I1005 20:04:04.614225  341844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/addons-029116/id_rsa Username:docker}
	I1005 20:04:04.616247  341844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/addons-029116/id_rsa Username:docker}
	I1005 20:04:04.616459  341844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/addons-029116/id_rsa Username:docker}
	I1005 20:04:04.617700  341844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/addons-029116/id_rsa Username:docker}
	I1005 20:04:04.635273  341844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/addons-029116/id_rsa Username:docker}
	I1005 20:04:04.637495  341844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/addons-029116/id_rsa Username:docker}
	W1005 20:04:04.644917  341844 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1005 20:04:04.644955  341844 retry.go:31] will retry after 176.510518ms: ssh: handshake failed: EOF
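The dial failure at 20:04:04.644917 is absorbed by minikube's retry helper (retry.go:31): the operation is re-run after a short randomized delay instead of aborting addon enablement. A minimal sketch of that retry-with-jittered-backoff pattern; the retryAfter name, attempt count, and base delay are illustrative, not minikube's implementation:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryAfter re-runs op until it succeeds or attempts are exhausted,
// sleeping a jittered delay between tries (illustrative constants).
func retryAfter(attempts int, base time.Duration, op func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		delay := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	err := retryAfter(3, 100*time.Millisecond, func() error {
		calls++
		if calls < 2 {
			return errors.New("ssh: handshake failed: EOF") // fails once, then recovers
		}
		return nil
	})
	fmt.Println("result:", err)
}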
	I1005 20:04:04.837748  341844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1005 20:04:04.838866  341844 node_ready.go:35] waiting up to 6m0s for node "addons-029116" to be "Ready" ...
	I1005 20:04:04.937814  341844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1005 20:04:05.028174  341844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1005 20:04:05.035219  341844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1005 20:04:05.040998  341844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1005 20:04:05.125420  341844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1005 20:04:05.129161  341844 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I1005 20:04:05.129262  341844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I1005 20:04:05.219916  341844 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1005 20:04:05.220023  341844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1005 20:04:05.221266  341844 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1005 20:04:05.221349  341844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1005 20:04:05.236248  341844 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1005 20:04:05.236280  341844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1005 20:04:05.321088  341844 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1005 20:04:05.321183  341844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I1005 20:04:05.321778  341844 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1005 20:04:05.321859  341844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1005 20:04:05.327871  341844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1005 20:04:05.337010  341844 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1005 20:04:05.337099  341844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1005 20:04:05.440166  341844 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1005 20:04:05.440262  341844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1005 20:04:05.521848  341844 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1005 20:04:05.521945  341844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1005 20:04:05.529238  341844 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1005 20:04:05.529338  341844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1005 20:04:05.620842  341844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1005 20:04:05.624850  341844 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1005 20:04:05.624885  341844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1005 20:04:05.639183  341844 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1005 20:04:05.639216  341844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1005 20:04:05.739003  341844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1005 20:04:05.836644  341844 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1005 20:04:05.836748  341844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1005 20:04:05.925287  341844 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1005 20:04:05.926024  341844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1005 20:04:05.930834  341844 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1005 20:04:05.930878  341844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1005 20:04:06.020297  341844 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1005 20:04:06.020355  341844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1005 20:04:06.133803  341844 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1005 20:04:06.133898  341844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1005 20:04:06.326196  341844 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1005 20:04:06.326315  341844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1005 20:04:06.337992  341844 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1005 20:04:06.338031  341844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1005 20:04:06.342314  341844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1005 20:04:06.628273  341844 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1005 20:04:06.628326  341844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1005 20:04:06.729043  341844 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1005 20:04:06.729143  341844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1005 20:04:06.831750  341844 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1005 20:04:06.831865  341844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1005 20:04:07.025638  341844 node_ready.go:58] node "addons-029116" has status "Ready":"False"
	I1005 20:04:07.129526  341844 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.291704857s)
	I1005 20:04:07.129634  341844 start.go:923] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
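The bash pipeline that just completed (2.29s) rewrites the coredns ConfigMap in place. Reconstructed from its sed expressions, the edit inserts a log directive before the errors plugin and, ahead of the forward plugin, a hosts stanza that resolves host.minikube.internal to the host-side gateway:

        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }

fallthrough hands every other name on to the remaining plugins, so only the injected record is served from the hosts block.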
	I1005 20:04:07.130194  341844 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1005 20:04:07.130272  341844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1005 20:04:07.229634  341844 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1005 20:04:07.229733  341844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1005 20:04:07.241152  341844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1005 20:04:07.330967  341844 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1005 20:04:07.331058  341844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1005 20:04:07.622090  341844 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1005 20:04:07.622124  341844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1005 20:04:07.733397  341844 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1005 20:04:07.733493  341844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I1005 20:04:07.740181  341844 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1005 20:04:07.740283  341844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1005 20:04:07.939866  341844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1005 20:04:08.138449  341844 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1005 20:04:08.138548  341844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1005 20:04:08.520898  341844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1005 20:04:08.638646  341844 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.7007717s)
	I1005 20:04:09.032648  341844 node_ready.go:58] node "addons-029116" has status "Ready":"False"
	I1005 20:04:09.842441  341844 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.814187945s)
	I1005 20:04:11.027373  341844 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.992037276s)
	I1005 20:04:11.027413  341844 addons.go:467] Verifying addon ingress=true in "addons-029116"
	I1005 20:04:11.028864  341844 out.go:177] * Verifying ingress addon...
	I1005 20:04:11.027512  341844 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.986474724s)
	I1005 20:04:11.027566  341844 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.902054688s)
	I1005 20:04:11.027620  341844 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.699717456s)
	I1005 20:04:11.027692  341844 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (5.406815452s)
	I1005 20:04:11.027750  341844 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.288693676s)
	I1005 20:04:11.027826  341844 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.685471316s)
	I1005 20:04:11.027940  341844 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.786691748s)
	I1005 20:04:11.028011  341844 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.088037431s)
	I1005 20:04:11.030265  341844 addons.go:467] Verifying addon registry=true in "addons-029116"
	I1005 20:04:11.030317  341844 addons.go:467] Verifying addon metrics-server=true in "addons-029116"
	W1005 20:04:11.030330  341844 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1005 20:04:11.030359  341844 retry.go:31] will retry after 183.624826ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
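The stderr above is an ordering race, not a manifest bug: the VolumeSnapshotClass object is sent in the same kubectl apply as the CRDs that define it, and the API server has not yet established the new types when the custom resource arrives, so REST mapping fails. The retried apply at 20:04:11.214805 below succeeds because the CRDs are established by then. Outside of a retry loop, the same race is commonly avoided by splitting the apply and waiting for establishment, e.g. (illustrative commands, not taken from this log):

kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
kubectl wait --for condition=established --timeout=60s \
  crd/volumesnapshotclasses.snapshot.storage.k8s.io
kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml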
	I1005 20:04:11.032757  341844 out.go:177] * Verifying registry addon...
	I1005 20:04:11.031144  341844 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1005 20:04:11.035059  341844 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	W1005 20:04:11.039771  341844 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
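The storage-provisioner-rancher warning above is an optimistic-concurrency conflict: between reading the local-path StorageClass and writing back the default-class annotation, another writer bumped its resourceVersion, so the update is rejected with "the object has been modified". The standard client-go remedy is to re-read and retry the mutation; a minimal sketch using retry.RetryOnConflict (the kubeconfig path is taken from this log, everything else is illustrative):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Re-read on every attempt so the update carries the latest
	// resourceVersion; RetryOnConflict retries only 409 conflicts.
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), "local-path", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
		_, err = cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
		return err
	})
	fmt.Println("done:", err)
}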
	I1005 20:04:11.040535  341844 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1005 20:04:11.040560  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:11.040715  341844 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1005 20:04:11.040741  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:11.044485  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:11.044539  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
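The kapi.go lines from here on are a once-per-second poll: list pods by label selector, report the aggregate phase, and return once every match is Running (or a timeout fires). A stripped-down client-go sketch of that wait loop; the namespace, selector, and deadline mirror values in this log, but the code itself is illustrative rather than minikube's kapi package:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods("ingress-nginx").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "app.kubernetes.io/name=ingress-nginx"})
		if err == nil {
			ready := len(pods.Items) > 0
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					ready = false
					fmt.Printf("waiting for pod %q, current state: %s\n", p.Name, p.Status.Phase)
				}
			}
			if ready {
				fmt.Println("all pods Running")
				return
			}
		}
		time.Sleep(time.Second)
	}
	fmt.Println("timed out waiting for pods")
}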
	I1005 20:04:11.214805  341844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1005 20:04:11.325487  341844 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1005 20:04:11.325571  341844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-029116
	I1005 20:04:11.354196  341844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/addons-029116/id_rsa Username:docker}
	I1005 20:04:11.449048  341844 node_ready.go:58] node "addons-029116" has status "Ready":"False"
	I1005 20:04:11.532846  341844 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1005 20:04:11.548858  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:11.549369  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:11.555112  341844 addons.go:231] Setting addon gcp-auth=true in "addons-029116"
	I1005 20:04:11.555229  341844 host.go:66] Checking if "addons-029116" exists ...
	I1005 20:04:11.555784  341844 cli_runner.go:164] Run: docker container inspect addons-029116 --format={{.State.Status}}
	I1005 20:04:11.577498  341844 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1005 20:04:11.577551  341844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-029116
	I1005 20:04:11.595718  341844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/addons-029116/id_rsa Username:docker}
	I1005 20:04:12.047976  341844 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.527007809s)
	I1005 20:04:12.048030  341844 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-029116"
	I1005 20:04:12.051363  341844 out.go:177] * Verifying csi-hostpath-driver addon...
	I1005 20:04:12.050260  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:12.050502  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:12.054045  341844 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1005 20:04:12.058815  341844 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1005 20:04:12.058840  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:12.063202  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:12.460815  341844 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.245931938s)
	I1005 20:04:12.462718  341844 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1005 20:04:12.464578  341844 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I1005 20:04:12.466237  341844 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1005 20:04:12.466268  341844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1005 20:04:12.485792  341844 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1005 20:04:12.485845  341844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1005 20:04:12.504723  341844 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1005 20:04:12.504751  341844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I1005 20:04:12.523550  341844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1005 20:04:12.550151  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:12.550476  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:12.568557  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:13.122067  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:13.127670  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:13.129794  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:13.221720  341844 addons.go:467] Verifying addon gcp-auth=true in "addons-029116"
	I1005 20:04:13.223414  341844 out.go:177] * Verifying gcp-auth addon...
	I1005 20:04:13.226029  341844 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1005 20:04:13.230064  341844 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1005 20:04:13.230091  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:13.239339  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:13.621957  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:13.622936  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:13.625138  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:13.743882  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:13.948840  341844 node_ready.go:58] node "addons-029116" has status "Ready":"False"
	I1005 20:04:14.052942  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:14.053407  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:14.121489  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:14.244424  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:14.550617  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:14.551527  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:14.621894  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:14.743460  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:15.051957  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:15.051991  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:15.120907  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:15.244688  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:15.549858  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:15.550181  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:15.568787  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:15.743420  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:15.949460  341844 node_ready.go:58] node "addons-029116" has status "Ready":"False"
	I1005 20:04:16.049684  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:16.049978  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:16.070240  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:16.243392  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:16.549817  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:16.550226  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:16.568251  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:16.743909  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:17.049282  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:17.049504  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:17.069032  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:17.243380  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:17.549491  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:17.549979  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:17.567894  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:17.743411  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:18.049129  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:18.049261  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:18.068072  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:18.252741  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:18.448396  341844 node_ready.go:58] node "addons-029116" has status "Ready":"False"
	I1005 20:04:18.549073  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:18.549278  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:18.568327  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:18.743653  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:19.049646  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:19.049896  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:19.067697  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:19.242824  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:19.549247  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:19.549465  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:19.568416  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:19.743847  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:20.048751  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:20.049030  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:20.069166  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:20.243429  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:20.448494  341844 node_ready.go:58] node "addons-029116" has status "Ready":"False"
	I1005 20:04:20.549267  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:20.549485  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:20.568517  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:20.743605  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:21.049145  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:21.049334  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:21.068386  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:21.243825  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:21.548910  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:21.549120  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:21.567957  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:21.743333  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:22.049216  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:22.049344  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:22.068232  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:22.243346  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:22.549447  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:22.549464  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:22.567571  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:22.743607  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:22.947666  341844 node_ready.go:58] node "addons-029116" has status "Ready":"False"
	I1005 20:04:23.049695  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:23.049764  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:23.067726  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:23.242862  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:23.549200  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:23.549560  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:23.568326  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:23.743654  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:24.049577  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:24.049869  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:24.067605  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:24.243898  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:24.548964  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:24.549245  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:24.568351  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:24.743523  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:24.948449  341844 node_ready.go:58] node "addons-029116" has status "Ready":"False"
	I1005 20:04:25.049115  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:25.049488  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:25.068289  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:25.243755  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:25.549588  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:25.550003  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:25.567708  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:25.742943  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:26.049063  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:26.049236  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:26.068040  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:26.243869  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:26.548863  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:26.549087  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:26.567992  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:26.743696  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:27.049533  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:27.049571  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:27.067579  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:27.243926  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:27.448279  341844 node_ready.go:58] node "addons-029116" has status "Ready":"False"
	I1005 20:04:27.549066  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:27.549372  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:27.568257  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:27.743835  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:28.048490  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:28.048742  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:28.067272  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:28.243551  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:28.549411  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:28.549835  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:28.568490  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:28.745213  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:29.049278  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:29.049497  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:29.068412  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:29.244014  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:29.549020  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:29.549122  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:29.568374  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:29.743787  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:29.947856  341844 node_ready.go:58] node "addons-029116" has status "Ready":"False"
	I1005 20:04:30.048867  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:30.049116  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:30.067626  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:30.242830  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:30.548798  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:30.549307  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:30.567867  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:30.743153  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:31.049475  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:31.049669  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:31.068777  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:31.243387  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:31.549583  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:31.549926  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:31.567700  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:31.743079  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:31.948050  341844 node_ready.go:58] node "addons-029116" has status "Ready":"False"
	I1005 20:04:32.048920  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:32.049287  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:32.067810  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:32.243163  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:32.550614  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:32.550760  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:32.567666  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:32.742668  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:33.049760  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:33.049960  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:33.067814  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:33.243024  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:33.548876  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:33.549039  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:33.567667  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:33.742914  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:34.048663  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:34.048570  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:34.067483  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:34.243555  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:34.447513  341844 node_ready.go:58] node "addons-029116" has status "Ready":"False"
	I1005 20:04:34.549450  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:34.549939  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:34.568263  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:34.743318  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:35.049042  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:35.049265  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:35.067980  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:35.243534  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:35.550395  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:35.550715  341844 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1005 20:04:35.550738  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:35.627124  341844 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1005 20:04:35.627154  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
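For context on the repeating kapi.go:96 lines above and below: minikube polls each addon's pods by label selector until they leave Pending, logging "Found N Pods" once the selector matches. A minimal sketch of that polling pattern, assuming client-go; the function name, namespace parameter, and 500ms interval are illustrative assumptions, not minikube's actual kapi.go code.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForLabel polls every 500ms until each pod matching selector is
	// Running; an illustrative sketch of the loop behind the kapi.go:96 lines.
	func waitForLabel(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // nothing matched yet: the "Pending: [<nil>]" state above
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					return false, nil
				}
			}
			return true, nil
		})
	}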
	I1005 20:04:35.748796  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:35.948693  341844 node_ready.go:49] node "addons-029116" has status "Ready":"True"
	I1005 20:04:35.948737  341844 node_ready.go:38] duration metric: took 31.109780741s waiting for node "addons-029116" to be "Ready" ...
	I1005 20:04:35.948751  341844 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1005 20:04:35.958911  341844 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-gwvr7" in "kube-system" namespace to be "Ready" ...
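The pod_ready.go waits that begin here test the pod's Ready condition rather than its phase. A minimal sketch of that check, reusing the corev1 import from the sketch above; the helper name is an assumption for illustration.

	// isPodReady reports whether the Ready condition is True, the check
	// behind the pod_ready.go "has status \"Ready\":\"True\"" lines below.
	func isPodReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}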
	I1005 20:04:36.050587  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:36.051167  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:36.130261  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:36.243346  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:36.550509  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:36.550522  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:36.628233  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:36.752078  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:37.125873  341844 pod_ready.go:92] pod "coredns-5dd5756b68-gwvr7" in "kube-system" namespace has status "Ready":"True"
	I1005 20:04:37.125915  341844 pod_ready.go:81] duration metric: took 1.166958998s waiting for pod "coredns-5dd5756b68-gwvr7" in "kube-system" namespace to be "Ready" ...
	I1005 20:04:37.125946  341844 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-029116" in "kube-system" namespace to be "Ready" ...
	I1005 20:04:37.128290  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:37.132986  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:37.135784  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:37.139365  341844 pod_ready.go:92] pod "etcd-addons-029116" in "kube-system" namespace has status "Ready":"True"
	I1005 20:04:37.139402  341844 pod_ready.go:81] duration metric: took 13.445611ms waiting for pod "etcd-addons-029116" in "kube-system" namespace to be "Ready" ...
	I1005 20:04:37.139422  341844 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-029116" in "kube-system" namespace to be "Ready" ...
	I1005 20:04:37.147661  341844 pod_ready.go:92] pod "kube-apiserver-addons-029116" in "kube-system" namespace has status "Ready":"True"
	I1005 20:04:37.147687  341844 pod_ready.go:81] duration metric: took 8.256142ms waiting for pod "kube-apiserver-addons-029116" in "kube-system" namespace to be "Ready" ...
	I1005 20:04:37.147697  341844 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-029116" in "kube-system" namespace to be "Ready" ...
	I1005 20:04:37.229648  341844 pod_ready.go:92] pod "kube-controller-manager-addons-029116" in "kube-system" namespace has status "Ready":"True"
	I1005 20:04:37.229692  341844 pod_ready.go:81] duration metric: took 81.985734ms waiting for pod "kube-controller-manager-addons-029116" in "kube-system" namespace to be "Ready" ...
	I1005 20:04:37.229713  341844 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fsmq4" in "kube-system" namespace to be "Ready" ...
	I1005 20:04:37.244736  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:37.550514  341844 pod_ready.go:92] pod "kube-proxy-fsmq4" in "kube-system" namespace has status "Ready":"True"
	I1005 20:04:37.550543  341844 pod_ready.go:81] duration metric: took 320.820814ms waiting for pod "kube-proxy-fsmq4" in "kube-system" namespace to be "Ready" ...
	I1005 20:04:37.550557  341844 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-029116" in "kube-system" namespace to be "Ready" ...
	I1005 20:04:37.550751  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:37.551702  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:37.626298  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:37.744352  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:37.948484  341844 pod_ready.go:92] pod "kube-scheduler-addons-029116" in "kube-system" namespace has status "Ready":"True"
	I1005 20:04:37.948512  341844 pod_ready.go:81] duration metric: took 397.947318ms waiting for pod "kube-scheduler-addons-029116" in "kube-system" namespace to be "Ready" ...
	I1005 20:04:37.948523  341844 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-mkm8k" in "kube-system" namespace to be "Ready" ...
	I1005 20:04:38.050137  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:38.050560  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:38.069198  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:38.243740  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:38.549779  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:38.550268  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:38.568631  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:38.743408  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:39.051948  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:39.053006  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:39.069707  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:39.243770  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:39.550078  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:39.550130  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:39.569020  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:39.743492  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:40.050152  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:40.050356  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:40.070406  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:40.245054  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:40.256664  341844 pod_ready.go:102] pod "metrics-server-7c66d45ddc-mkm8k" in "kube-system" namespace has status "Ready":"False"
	I1005 20:04:40.549861  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:40.550526  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:40.569346  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:40.744902  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:41.049984  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:41.050092  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:41.070229  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:41.243487  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:41.549950  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:41.550190  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:41.569397  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:41.744193  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:42.049729  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:42.049786  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:42.070598  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:42.244296  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:42.549870  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:42.550210  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:42.570010  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:42.744014  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:42.755223  341844 pod_ready.go:102] pod "metrics-server-7c66d45ddc-mkm8k" in "kube-system" namespace has status "Ready":"False"
	I1005 20:04:43.049453  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:43.050040  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:43.069524  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:43.243491  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:43.549082  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:43.549221  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:43.568835  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:43.743685  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:44.049565  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:44.049631  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:44.070253  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:44.244306  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:44.553745  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:44.558401  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:44.568438  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:44.743431  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:44.755405  341844 pod_ready.go:102] pod "metrics-server-7c66d45ddc-mkm8k" in "kube-system" namespace has status "Ready":"False"
	I1005 20:04:45.050033  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:45.050039  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:45.069228  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:45.243996  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:45.549563  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:45.549804  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:45.569633  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:45.743007  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:46.050021  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:46.050859  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:46.124994  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:46.243631  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:46.256527  341844 pod_ready.go:92] pod "metrics-server-7c66d45ddc-mkm8k" in "kube-system" namespace has status "Ready":"True"
	I1005 20:04:46.256553  341844 pod_ready.go:81] duration metric: took 8.308023415s waiting for pod "metrics-server-7c66d45ddc-mkm8k" in "kube-system" namespace to be "Ready" ...
	I1005 20:04:46.256573  341844 pod_ready.go:38] duration metric: took 10.307804681s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1005 20:04:46.256595  341844 api_server.go:52] waiting for apiserver process to appear ...
	I1005 20:04:46.256662  341844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1005 20:04:46.324607  341844 api_server.go:72] duration metric: took 41.735256234s to wait for apiserver process to appear ...
	I1005 20:04:46.324687  341844 api_server.go:88] waiting for apiserver healthz status ...
	I1005 20:04:46.324723  341844 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1005 20:04:46.330588  341844 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1005 20:04:46.332065  341844 api_server.go:141] control plane version: v1.28.2
	I1005 20:04:46.332105  341844 api_server.go:131] duration metric: took 7.396225ms to wait for apiserver health ...
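The healthz probe logged above is a plain HTTPS GET expecting a 200 response with the body "ok". A minimal standalone sketch against the endpoint from this log; skipping certificate verification here is a simplifying assumption for the demo, since minikube itself trusts the cluster CA.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		// Demo only: skip TLS verification for the self-signed apiserver cert.
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
	}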
	I1005 20:04:46.332117  341844 system_pods.go:43] waiting for kube-system pods to appear ...
	I1005 20:04:46.342489  341844 system_pods.go:59] 18 kube-system pods found
	I1005 20:04:46.342542  341844 system_pods.go:61] "coredns-5dd5756b68-gwvr7" [b3c75058-b239-4f41-bd85-9c75cd9cad56] Running
	I1005 20:04:46.342556  341844 system_pods.go:61] "csi-hostpath-attacher-0" [89fe299a-834d-4146-9683-76e32607358a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1005 20:04:46.342569  341844 system_pods.go:61] "csi-hostpath-resizer-0" [7124e88b-6e13-49f7-8392-0b8f832313b9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1005 20:04:46.342580  341844 system_pods.go:61] "csi-hostpathplugin-n7kt7" [335f7558-d3c6-40e5-be1b-ceaab0f5d936] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1005 20:04:46.342589  341844 system_pods.go:61] "etcd-addons-029116" [4287aa8d-7e7f-4ca7-ae10-02070bcce5fa] Running
	I1005 20:04:46.342604  341844 system_pods.go:61] "kindnet-c6742" [3aceecd9-2dd8-4290-a5cc-45a3aecc280b] Running
	I1005 20:04:46.342611  341844 system_pods.go:61] "kube-apiserver-addons-029116" [2ad77ab2-e6b8-423c-ab94-92f85ef7524e] Running
	I1005 20:04:46.342624  341844 system_pods.go:61] "kube-controller-manager-addons-029116" [c152285a-cec1-46bc-8b96-c0e848a17c62] Running
	I1005 20:04:46.342642  341844 system_pods.go:61] "kube-ingress-dns-minikube" [210e60cb-c9cb-4971-b48e-368985be2277] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1005 20:04:46.342656  341844 system_pods.go:61] "kube-proxy-fsmq4" [787bedf8-895e-446a-9fbe-b1b94fbbe5ab] Running
	I1005 20:04:46.342663  341844 system_pods.go:61] "kube-scheduler-addons-029116" [4663b73a-1229-4ddf-8e9f-5046c83c210f] Running
	I1005 20:04:46.342670  341844 system_pods.go:61] "metrics-server-7c66d45ddc-mkm8k" [22545870-82ee-4342-915b-7e2b9b12b4c5] Running
	I1005 20:04:46.342685  341844 system_pods.go:61] "registry-hn8mj" [aa7dd669-eb26-4f18-b687-fa48e28bb06e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1005 20:04:46.342699  341844 system_pods.go:61] "registry-proxy-fhzqz" [370dc78f-71d6-4ae7-9f2a-8b5fb1cbd997] Running
	I1005 20:04:46.342710  341844 system_pods.go:61] "snapshot-controller-58dbcc7b99-ddhst" [67cfa8e7-3ac4-48f9-ab83-e4b719727b1b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1005 20:04:46.342728  341844 system_pods.go:61] "snapshot-controller-58dbcc7b99-tl8nz" [bff39c64-5489-4bd8-83d8-f5f36bf8b017] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1005 20:04:46.342740  341844 system_pods.go:61] "storage-provisioner" [1cfe9db2-962a-4aee-81ce-d3fcceba1ee2] Running
	I1005 20:04:46.342759  341844 system_pods.go:61] "tiller-deploy-7b677967b9-sr5rj" [ebf369dd-a3fd-4f09-b54b-34350f02a788] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1005 20:04:46.342769  341844 system_pods.go:74] duration metric: took 10.644401ms to wait for pod list to return data ...
	I1005 20:04:46.342785  341844 default_sa.go:34] waiting for default service account to be created ...
	I1005 20:04:46.347843  341844 default_sa.go:45] found service account: "default"
	I1005 20:04:46.347885  341844 default_sa.go:55] duration metric: took 5.08272ms for default service account to be created ...
	I1005 20:04:46.347899  341844 system_pods.go:116] waiting for k8s-apps to be running ...
	I1005 20:04:46.358531  341844 system_pods.go:86] 18 kube-system pods found
	I1005 20:04:46.358574  341844 system_pods.go:89] "coredns-5dd5756b68-gwvr7" [b3c75058-b239-4f41-bd85-9c75cd9cad56] Running
	I1005 20:04:46.358588  341844 system_pods.go:89] "csi-hostpath-attacher-0" [89fe299a-834d-4146-9683-76e32607358a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1005 20:04:46.358602  341844 system_pods.go:89] "csi-hostpath-resizer-0" [7124e88b-6e13-49f7-8392-0b8f832313b9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1005 20:04:46.358616  341844 system_pods.go:89] "csi-hostpathplugin-n7kt7" [335f7558-d3c6-40e5-be1b-ceaab0f5d936] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1005 20:04:46.358628  341844 system_pods.go:89] "etcd-addons-029116" [4287aa8d-7e7f-4ca7-ae10-02070bcce5fa] Running
	I1005 20:04:46.358638  341844 system_pods.go:89] "kindnet-c6742" [3aceecd9-2dd8-4290-a5cc-45a3aecc280b] Running
	I1005 20:04:46.358651  341844 system_pods.go:89] "kube-apiserver-addons-029116" [2ad77ab2-e6b8-423c-ab94-92f85ef7524e] Running
	I1005 20:04:46.358662  341844 system_pods.go:89] "kube-controller-manager-addons-029116" [c152285a-cec1-46bc-8b96-c0e848a17c62] Running
	I1005 20:04:46.358676  341844 system_pods.go:89] "kube-ingress-dns-minikube" [210e60cb-c9cb-4971-b48e-368985be2277] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1005 20:04:46.358686  341844 system_pods.go:89] "kube-proxy-fsmq4" [787bedf8-895e-446a-9fbe-b1b94fbbe5ab] Running
	I1005 20:04:46.358697  341844 system_pods.go:89] "kube-scheduler-addons-029116" [4663b73a-1229-4ddf-8e9f-5046c83c210f] Running
	I1005 20:04:46.358708  341844 system_pods.go:89] "metrics-server-7c66d45ddc-mkm8k" [22545870-82ee-4342-915b-7e2b9b12b4c5] Running
	I1005 20:04:46.358718  341844 system_pods.go:89] "registry-hn8mj" [aa7dd669-eb26-4f18-b687-fa48e28bb06e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1005 20:04:46.358728  341844 system_pods.go:89] "registry-proxy-fhzqz" [370dc78f-71d6-4ae7-9f2a-8b5fb1cbd997] Running
	I1005 20:04:46.358741  341844 system_pods.go:89] "snapshot-controller-58dbcc7b99-ddhst" [67cfa8e7-3ac4-48f9-ab83-e4b719727b1b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1005 20:04:46.358757  341844 system_pods.go:89] "snapshot-controller-58dbcc7b99-tl8nz" [bff39c64-5489-4bd8-83d8-f5f36bf8b017] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1005 20:04:46.358769  341844 system_pods.go:89] "storage-provisioner" [1cfe9db2-962a-4aee-81ce-d3fcceba1ee2] Running
	I1005 20:04:46.358781  341844 system_pods.go:89] "tiller-deploy-7b677967b9-sr5rj" [ebf369dd-a3fd-4f09-b54b-34350f02a788] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1005 20:04:46.358796  341844 system_pods.go:126] duration metric: took 10.888584ms to wait for k8s-apps to be running ...
	I1005 20:04:46.358811  341844 system_svc.go:44] waiting for kubelet service to be running ....
	I1005 20:04:46.358874  341844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1005 20:04:46.430216  341844 system_svc.go:56] duration metric: took 71.391251ms WaitForService to wait for kubelet.
	I1005 20:04:46.430247  341844 kubeadm.go:581] duration metric: took 41.840920915s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1005 20:04:46.430277  341844 node_conditions.go:102] verifying NodePressure condition ...
	I1005 20:04:46.433648  341844 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1005 20:04:46.433685  341844 node_conditions.go:123] node cpu capacity is 8
	I1005 20:04:46.433702  341844 node_conditions.go:105] duration metric: took 3.418259ms to run NodePressure ...
	I1005 20:04:46.433721  341844 start.go:228] waiting for startup goroutines ...
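The node_conditions.go lines above read the node's reported capacity, 304681132Ki of ephemeral storage and 8 CPUs in this run. A minimal sketch of pulling those values, reusing the imports from the first sketch; the function name and hard-coded node name are assumptions for illustration.

	// printCapacity fetches the node object and prints the two capacity
	// figures logged by node_conditions.go above. Sketch only.
	func printCapacity(cs kubernetes.Interface) error {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "addons-029116", metav1.GetOptions{})
		if err != nil {
			return err
		}
		fmt.Printf("ephemeral-storage: %s, cpu: %s\n",
			node.Status.Capacity.StorageEphemeral().String(),
			node.Status.Capacity.Cpu().String())
		return nil
	}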
	I1005 20:04:46.549946  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:46.550699  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:46.568260  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:46.743582  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:47.050254  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:47.050297  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:47.069495  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:47.243208  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:47.549876  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:47.549930  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:47.569245  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:47.744027  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:48.049567  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:48.049564  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:48.069371  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:48.243027  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:48.549735  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:48.549860  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:48.570458  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:48.744044  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:49.049282  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:49.049583  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:49.070022  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:49.243924  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:49.550809  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:49.551915  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:49.624188  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:49.744542  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:50.050221  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:50.050221  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:50.069216  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:50.243303  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:50.550264  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:50.550872  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:50.569401  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:50.743700  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:51.053813  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:51.054970  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:51.069456  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:51.244669  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:51.632644  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:51.644690  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:51.724252  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:51.746591  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:52.124330  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:52.126251  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:52.130760  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:52.243268  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:52.550366  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:52.550627  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:52.625510  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:52.744089  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:53.049784  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:53.049824  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:53.069759  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:53.243537  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:53.550534  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:53.551607  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:53.570216  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:53.744222  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:54.049902  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:54.049980  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:54.068971  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:54.244608  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:54.549842  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:54.550169  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:54.568838  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:54.744353  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:55.050401  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:55.050492  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:55.070188  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:55.244728  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:55.549391  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:55.549539  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:55.569492  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:55.743374  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:56.050298  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:56.050408  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:56.070354  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:56.244089  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:56.550000  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:56.550412  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:56.569055  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:56.743764  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:57.049898  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:57.050084  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:57.072091  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:57.244040  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:57.549646  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:57.549715  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:57.568674  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:57.743469  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:58.053170  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:58.053301  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:58.069526  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:58.243572  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:58.550376  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:58.550468  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:58.569491  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:58.743543  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:59.049857  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:59.049909  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:59.069218  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:59.243663  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:04:59.549755  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:04:59.549781  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:04:59.568631  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:04:59.743281  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:00.051310  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:00.051451  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:00.125465  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:00.244375  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:00.550161  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:00.550389  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:00.569570  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:00.743531  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:01.050148  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:01.050289  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:01.069271  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:01.250453  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:01.550487  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:01.550627  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:01.569444  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:01.743169  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:02.053863  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:02.054173  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:02.068678  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:02.243046  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:02.551825  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:02.552027  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:02.568824  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:02.744319  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:03.049814  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:03.049861  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:03.069358  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:03.243962  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:03.549367  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:03.549565  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:03.569442  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:03.743187  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:04.050893  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:04.051968  341844 kapi.go:107] duration metric: took 53.016905052s to wait for kubernetes.io/minikube-addons=registry ...
	I1005 20:05:04.070454  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:04.244123  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:04.550384  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:04.569039  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:04.744103  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:05.049219  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:05.069780  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:05.246172  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:05.642013  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:05.643581  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:05.824673  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:06.050470  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:06.126807  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:06.245446  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:06.630261  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:06.632310  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:06.744159  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:07.050371  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:07.126809  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:07.243705  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:07.550360  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:07.569059  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:07.744180  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:08.050563  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:08.068833  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:08.243732  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:08.549090  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:08.569885  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:08.744073  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:09.049533  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:09.069143  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:09.243840  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:09.549132  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:09.570298  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:09.743766  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:10.049751  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:10.069198  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:10.321981  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:10.549710  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:10.569689  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:10.743756  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:11.049488  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:11.069768  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:11.244183  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:11.549537  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:11.569534  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:11.743766  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:12.051013  341844 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:12.069454  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:12.243692  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:12.549022  341844 kapi.go:107] duration metric: took 1m1.517874448s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1005 20:05:12.568730  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:12.743529  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:13.070644  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:13.242971  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:13.570061  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:13.743730  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:14.069387  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:14.243100  341844 kapi.go:107] duration metric: took 1m1.017071703s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1005 20:05:14.244801  341844 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-029116 cluster.
	I1005 20:05:14.246251  341844 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1005 20:05:14.247722  341844 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1005 20:05:14.624473  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:15.069391  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:15.569554  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:16.069403  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:16.569676  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:17.068714  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:17.569757  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:18.068758  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:18.570224  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:19.069690  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:19.569412  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:20.069245  341844 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:20.568790  341844 kapi.go:107] duration metric: took 1m8.514753115s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1005 20:05:20.571665  341844 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, inspektor-gadget, helm-tiller, ingress-dns, metrics-server, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1005 20:05:20.573189  341844 addons.go:502] enable addons completed in 1m16.098419479s: enabled=[cloud-spanner storage-provisioner inspektor-gadget helm-tiller ingress-dns metrics-server default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1005 20:05:20.573234  341844 start.go:233] waiting for cluster config update ...
	I1005 20:05:20.573256  341844 start.go:242] writing updated cluster config ...
	I1005 20:05:20.573587  341844 ssh_runner.go:195] Run: rm -f paused
	I1005 20:05:20.628158  341844 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1005 20:05:20.630225  341844 out.go:177] * Done! kubectl is now configured to use "addons-029116" cluster and "default" namespace by default
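	
	The kapi.go:96 / kapi.go:107 pairs above are minikube's label-selector polling: an addon counts as ready once every pod matching its label has left Pending, at which point the "duration metric" line is emitted. Below is a minimal client-go sketch of the same pattern; it is an illustration, not minikube's actual implementation, and the helper name, 500ms interval, and timeout are assumptions.
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// waitForPodsRunning polls until every pod matching selector in ns is Running,
	// or the timeout elapses. A sketch of the polling loop logged above.
	func waitForPodsRunning(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // transient error or nothing listed yet: keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil // still "Pending" in the log's terms
				}
			}
			return true, nil
		})
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		err = waitForPodsRunning(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute)
		fmt.Println("wait result:", err)
	}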
	
	* 
	* ==> CRI-O <==
	* Oct 05 20:07:52 addons-029116 crio[944]: time="2023-10-05 20:07:52.617396470Z" level=info msg="Removed pod sandbox: 8b65a1ce749c00c392b422f69ece6dae98706f75031d918821599e69ed58de52" id=62a78a23-7608-45c5-927a-6f89503e2637 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 05 20:07:52 addons-029116 crio[944]: time="2023-10-05 20:07:52.618023506Z" level=info msg="Stopping pod sandbox: 5ab1235849ce0b08202117adecd4161e2e5333c2ab990935c27b54ec0499c212" id=017fcbbe-c072-4433-b6d0-f028be2257bc name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 05 20:07:52 addons-029116 crio[944]: time="2023-10-05 20:07:52.618070191Z" level=info msg="Stopped pod sandbox (already stopped): 5ab1235849ce0b08202117adecd4161e2e5333c2ab990935c27b54ec0499c212" id=017fcbbe-c072-4433-b6d0-f028be2257bc name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 05 20:07:52 addons-029116 crio[944]: time="2023-10-05 20:07:52.618492655Z" level=info msg="Removing pod sandbox: 5ab1235849ce0b08202117adecd4161e2e5333c2ab990935c27b54ec0499c212" id=ad693a07-60ac-49c0-a34b-db40dd822684 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 05 20:07:52 addons-029116 crio[944]: time="2023-10-05 20:07:52.625293116Z" level=info msg="Removed pod sandbox: 5ab1235849ce0b08202117adecd4161e2e5333c2ab990935c27b54ec0499c212" id=ad693a07-60ac-49c0-a34b-db40dd822684 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 05 20:07:52 addons-029116 crio[944]: time="2023-10-05 20:07:52.625894542Z" level=info msg="Stopping pod sandbox: 4e3e92c1c3b1e14095cbff73506de95dbc458ee1b9eefd09b10d229cf4ea1044" id=0956fc7b-6e2f-4872-8c62-cb788dc80c31 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 05 20:07:52 addons-029116 crio[944]: time="2023-10-05 20:07:52.625934438Z" level=info msg="Stopped pod sandbox (already stopped): 4e3e92c1c3b1e14095cbff73506de95dbc458ee1b9eefd09b10d229cf4ea1044" id=0956fc7b-6e2f-4872-8c62-cb788dc80c31 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 05 20:07:52 addons-029116 crio[944]: time="2023-10-05 20:07:52.626245509Z" level=info msg="Removing pod sandbox: 4e3e92c1c3b1e14095cbff73506de95dbc458ee1b9eefd09b10d229cf4ea1044" id=5ed3f8ab-302d-4b98-ba35-18cf90534d7d name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 05 20:07:52 addons-029116 crio[944]: time="2023-10-05 20:07:52.632706905Z" level=info msg="Removed pod sandbox: 4e3e92c1c3b1e14095cbff73506de95dbc458ee1b9eefd09b10d229cf4ea1044" id=5ed3f8ab-302d-4b98-ba35-18cf90534d7d name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 05 20:07:52 addons-029116 crio[944]: time="2023-10-05 20:07:52.633270021Z" level=info msg="Stopping pod sandbox: 2ed4734d0d0f078f05e1ab1b2b65e3002780e21cf17204db7dcc1c4086425a0b" id=b04328d4-c23a-4aea-8447-81b9adb2f1ae name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 05 20:07:52 addons-029116 crio[944]: time="2023-10-05 20:07:52.633310028Z" level=info msg="Stopped pod sandbox (already stopped): 2ed4734d0d0f078f05e1ab1b2b65e3002780e21cf17204db7dcc1c4086425a0b" id=b04328d4-c23a-4aea-8447-81b9adb2f1ae name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 05 20:07:52 addons-029116 crio[944]: time="2023-10-05 20:07:52.633679078Z" level=info msg="Removing pod sandbox: 2ed4734d0d0f078f05e1ab1b2b65e3002780e21cf17204db7dcc1c4086425a0b" id=6a2a4079-2166-4cbc-b332-00796f4dbdc6 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 05 20:07:52 addons-029116 crio[944]: time="2023-10-05 20:07:52.640646908Z" level=info msg="Removed pod sandbox: 2ed4734d0d0f078f05e1ab1b2b65e3002780e21cf17204db7dcc1c4086425a0b" id=6a2a4079-2166-4cbc-b332-00796f4dbdc6 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 05 20:07:52 addons-029116 crio[944]: time="2023-10-05 20:07:52.641262351Z" level=info msg="Stopping pod sandbox: ab17cfd5d7c311ce07bad0d7f2d4513235e4d854e3229a738e62379bc1ea308c" id=633ad482-fce9-4f2c-aa7b-cbf4abf60278 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 05 20:07:52 addons-029116 crio[944]: time="2023-10-05 20:07:52.641311694Z" level=info msg="Stopped pod sandbox (already stopped): ab17cfd5d7c311ce07bad0d7f2d4513235e4d854e3229a738e62379bc1ea308c" id=633ad482-fce9-4f2c-aa7b-cbf4abf60278 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 05 20:07:52 addons-029116 crio[944]: time="2023-10-05 20:07:52.641685518Z" level=info msg="Removing pod sandbox: ab17cfd5d7c311ce07bad0d7f2d4513235e4d854e3229a738e62379bc1ea308c" id=714b2852-315f-4b21-bc94-36ceac06a9da name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 05 20:07:52 addons-029116 crio[944]: time="2023-10-05 20:07:52.648170505Z" level=info msg="Removed pod sandbox: ab17cfd5d7c311ce07bad0d7f2d4513235e4d854e3229a738e62379bc1ea308c" id=714b2852-315f-4b21-bc94-36ceac06a9da name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 05 20:07:52 addons-029116 crio[944]: time="2023-10-05 20:07:52.648769362Z" level=info msg="Stopping pod sandbox: 651adc6779604ea204d108174b707d9c417bd433d04ce4f4ed386fd8d883d961" id=1afe7a84-5d3d-47c6-b4b0-0231c91b625e name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 05 20:07:52 addons-029116 crio[944]: time="2023-10-05 20:07:52.648814516Z" level=info msg="Stopped pod sandbox (already stopped): 651adc6779604ea204d108174b707d9c417bd433d04ce4f4ed386fd8d883d961" id=1afe7a84-5d3d-47c6-b4b0-0231c91b625e name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 05 20:07:52 addons-029116 crio[944]: time="2023-10-05 20:07:52.649215905Z" level=info msg="Removing pod sandbox: 651adc6779604ea204d108174b707d9c417bd433d04ce4f4ed386fd8d883d961" id=12faa187-9dde-420f-b4f2-e2efdb1dbf5a name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 05 20:07:52 addons-029116 crio[944]: time="2023-10-05 20:07:52.658307736Z" level=info msg="Removed pod sandbox: 651adc6779604ea204d108174b707d9c417bd433d04ce4f4ed386fd8d883d961" id=12faa187-9dde-420f-b4f2-e2efdb1dbf5a name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 05 20:07:52 addons-029116 crio[944]: time="2023-10-05 20:07:52.658878286Z" level=info msg="Stopping pod sandbox: ecd3b95f04b1d3215e217b086d49c33fa72c5b2e6e646f07bcd871997f499c27" id=20dc6a22-4232-42aa-9d88-3c724014158d name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 05 20:07:52 addons-029116 crio[944]: time="2023-10-05 20:07:52.658923355Z" level=info msg="Stopped pod sandbox (already stopped): ecd3b95f04b1d3215e217b086d49c33fa72c5b2e6e646f07bcd871997f499c27" id=20dc6a22-4232-42aa-9d88-3c724014158d name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 05 20:07:52 addons-029116 crio[944]: time="2023-10-05 20:07:52.659246099Z" level=info msg="Removing pod sandbox: ecd3b95f04b1d3215e217b086d49c33fa72c5b2e6e646f07bcd871997f499c27" id=6672bf72-872c-4f73-a5b0-2db6e576ffe4 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 05 20:07:52 addons-029116 crio[944]: time="2023-10-05 20:07:52.666572306Z" level=info msg="Removed pod sandbox: ecd3b95f04b1d3215e217b086d49c33fa72c5b2e6e646f07bcd871997f499c27" id=6672bf72-872c-4f73-a5b0-2db6e576ffe4 name=/runtime.v1.RuntimeService/RemovePodSandbox
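	
	The StopPodSandbox / RemovePodSandbox pairs above show the usual CRI teardown sequence: stop is idempotent (hence "already stopped"), then the sandbox is removed. A sketch of the same two calls against CRI-O's socket (the path appears in the node's cri-socket annotation further down), using the k8s.io/cri-api gRPC client; the sandbox ID is copied from the log and the timeout is an assumption.
	
	package main
	
	import (
		"context"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		// Dial CRI-O over its unix socket.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()
	
		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()
	
		id := "8b65a1ce749c00c392b422f69ece6dae98706f75031d918821599e69ed58de52"
		// Stop first (a no-op if already stopped), then remove.
		if _, err := client.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{PodSandboxId: id}); err != nil {
			panic(err)
		}
		if _, err := client.RemovePodSandbox(ctx, &runtimeapi.RemovePodSandboxRequest{PodSandboxId: id}); err != nil {
			panic(err)
		}
	}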
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4d526df38e895       gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6            9 seconds ago       Running             hello-world-app           0                   6440fe698a83a       hello-world-app-5d77478584-sbjd2
	03420b2e908cd       ghcr.io/headlamp-k8s/headlamp@sha256:669910dd0cb38640a44cb93fedebaffb8f971131576ee13b2fee27a784e7503c              2 minutes ago       Running             headlamp                  0                   ed92bf9bbb6e9       headlamp-58b88cff49-6nxs6
	4f6a0e092c868       docker.io/library/nginx@sha256:34b58b4f5c6d133d97298cbaae140283dc325ff1aeffb28176f63078baeffd14                    2 minutes ago       Running             nginx                     0                   dd8cfbbfa5c11       nginx
	77c66159354dc       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06       2 minutes ago       Running             gcp-auth                  0                   cdc8509dac014       gcp-auth-d4c87556c-2wjmz
	2b99dd65c5cef       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef   3 minutes ago       Running             local-path-provisioner    0                   e653ee89fd56b       local-path-provisioner-78b46b4d5c-krppq
	650c4b293c0e7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       0                   fe5c6456eb623       storage-provisioner
	b415d8a422144       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                   3 minutes ago       Running             coredns                   0                   711fc7d1848d3       coredns-5dd5756b68-gwvr7
	4f04c8be271d2       c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0                                                   3 minutes ago       Running             kube-proxy                0                   40dfc5502bec2       kube-proxy-fsmq4
	1b05525918bca       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                                   3 minutes ago       Running             kindnet-cni               0                   2e1a43adf5a83       kindnet-c6742
	5f7b78b6e6e33       cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce                                                   4 minutes ago       Running             kube-apiserver            0                   9882356b2d091       kube-apiserver-addons-029116
	37a4f09ac77c0       7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8                                                   4 minutes ago       Running             kube-scheduler            0                   1b1a69e62f287       kube-scheduler-addons-029116
	4fdf46250be45       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                   4 minutes ago       Running             etcd                      0                   b4323ade0df09       etcd-addons-029116
	350933d7ea410       55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57                                                   4 minutes ago       Running             kube-controller-manager   0                   66e07ae509c12       kube-controller-manager-addons-029116
	
	* 
	* ==> coredns [b415d8a42214463c2b81194d93775b47e3991f4ec75e76e754b3ed4298691249] <==
	* [INFO] 10.244.0.3:42598 - 63003 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000106505s
	[INFO] 10.244.0.3:54442 - 62857 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.006788994s
	[INFO] 10.244.0.3:54442 - 25229 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.008041067s
	[INFO] 10.244.0.3:51355 - 53325 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004239986s
	[INFO] 10.244.0.3:51355 - 23889 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.006516578s
	[INFO] 10.244.0.3:47042 - 55036 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.005790554s
	[INFO] 10.244.0.3:47042 - 38648 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.00616051s
	[INFO] 10.244.0.3:56044 - 52310 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000083973s
	[INFO] 10.244.0.3:56044 - 26715 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000134889s
	[INFO] 10.244.0.19:35536 - 49309 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000212556s
	[INFO] 10.244.0.19:33643 - 33461 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00024206s
	[INFO] 10.244.0.19:47632 - 45792 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000179184s
	[INFO] 10.244.0.19:35671 - 17483 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000162622s
	[INFO] 10.244.0.19:53587 - 55042 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000116466s
	[INFO] 10.244.0.19:53215 - 56512 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000111857s
	[INFO] 10.244.0.19:45699 - 58074 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.005569889s
	[INFO] 10.244.0.19:59971 - 61333 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.006045362s
	[INFO] 10.244.0.19:43210 - 48883 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.005081804s
	[INFO] 10.244.0.19:33478 - 18832 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.006639078s
	[INFO] 10.244.0.19:50177 - 7440 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005538882s
	[INFO] 10.244.0.19:52527 - 48689 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007833095s
	[INFO] 10.244.0.19:59204 - 31372 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000996441s
	[INFO] 10.244.0.19:45549 - 54346 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001017456s
	[INFO] 10.244.0.23:53427 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000301164s
	[INFO] 10.244.0.23:58300 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000171208s
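	
	The runs of NXDOMAIN answers ending in a final NOERROR above are ordinary resolver search-path expansion: the querying pod's resolv.conf appends each search suffix in turn before (or after, depending on ndots) trying the name as-is. A small Go sketch of that expansion follows, assuming the typical in-cluster ndots:5 default and taking the suffixes from the logged queries; the first cluster suffixes (namespace and svc) do not appear in this excerpt, so only the four visible ones are used.
	
	package main
	
	import (
		"fmt"
		"strings"
	)
	
	// expandQueries reproduces the resolver's search-list expansion that
	// generates the NXDOMAIN ladder in the coredns log above.
	func expandQueries(name string, search []string, ndots int) []string {
		var qs []string
		if strings.Count(name, ".") >= ndots {
			qs = append(qs, name+".") // enough dots: absolute name tried first
		}
		for _, s := range search {
			qs = append(qs, name+"."+s+".")
		}
		if strings.Count(name, ".") < ndots {
			qs = append(qs, name+".") // otherwise absolute name tried last
		}
		return qs
	}
	
	func main() {
		search := []string{"cluster.local", "us-central1-a.c.k8s-minikube.internal", "c.k8s-minikube.internal", "google.internal"}
		for _, q := range expandQueries("registry.kube-system.svc.cluster.local", search, 5) {
			fmt.Println(q) // matches the query order seen in the log
		}
	}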
	
	* 
	* ==> describe nodes <==
	* Name:               addons-029116
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-029116
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=300d55cee86053f5b4c7a654fc8e7b9d3c030d53
	                    minikube.k8s.io/name=addons-029116
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_05T20_03_52_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-029116
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Oct 2023 20:03:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-029116
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Oct 2023 20:07:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Oct 2023 20:06:24 +0000   Thu, 05 Oct 2023 20:03:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Oct 2023 20:06:24 +0000   Thu, 05 Oct 2023 20:03:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Oct 2023 20:06:24 +0000   Thu, 05 Oct 2023 20:03:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Oct 2023 20:06:24 +0000   Thu, 05 Oct 2023 20:04:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-029116
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	System Info:
	  Machine ID:                 4f946e77153645d2b918fc275c82adb2
	  System UUID:                bb65f877-6aac-4bc5-948e-536e51a412c7
	  Boot ID:                    442b7abc-f6f6-4fc0-9fdb-d53241b6517a
	  Kernel Version:             5.15.0-1044-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-sbjd2           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  gcp-auth                    gcp-auth-d4c87556c-2wjmz                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  headlamp                    headlamp-58b88cff49-6nxs6                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 coredns-5dd5756b68-gwvr7                   100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m50s
	  kube-system                 etcd-addons-029116                         100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m3s
	  kube-system                 kindnet-c6742                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m51s
	  kube-system                 kube-apiserver-addons-029116               250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-controller-manager-addons-029116      200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-proxy-fsmq4                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 kube-scheduler-addons-029116               100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	  local-path-storage          local-path-provisioner-78b46b4d5c-krppq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 3m49s  kube-proxy       
	  Normal  Starting                 4m3s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m3s   kubelet          Node addons-029116 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m3s   kubelet          Node addons-029116 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m3s   kubelet          Node addons-029116 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m51s  node-controller  Node addons-029116 event: Registered Node addons-029116 in Controller
	  Normal  NodeReady                3m19s  kubelet          Node addons-029116 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.000022] ll header: 00000000: 02 42 e5 4a 8f 17 02 42 c0 a8 4c 02 08 00
	[  +0.831893] IPv4: martian source 10.244.0.2 from 10.96.0.1, on dev br-fa023bbcfc96
	[  +0.000025] ll header: 00000000: 02 42 e5 4a 8f 17 02 42 c0 a8 4c 02 08 00
	[  +1.667822] IPv4: martian source 10.244.0.2 from 10.96.0.1, on dev br-fa023bbcfc96
	[  +0.000006] ll header: 00000000: 02 42 e5 4a 8f 17 02 42 c0 a8 4c 02 08 00
	[  +3.451663] IPv4: martian source 10.244.0.2 from 10.96.0.1, on dev br-fa023bbcfc96
	[  +0.000007] ll header: 00000000: 02 42 e5 4a 8f 17 02 42 c0 a8 4c 02 08 00
	[  +6.655347] IPv4: martian source 10.244.0.2 from 10.96.0.1, on dev br-fa023bbcfc96
	[  +0.000008] ll header: 00000000: 02 42 e5 4a 8f 17 02 42 c0 a8 4c 02 08 00
	[Oct 5 19:41] IPv4: martian source 10.244.0.2 from 10.96.0.1, on dev br-fa023bbcfc96
	[  +0.000009] ll header: 00000000: 02 42 e5 4a 8f 17 02 42 c0 a8 4c 02 08 00
	[Oct 5 20:05] IPv4: martian source 10.244.0.18 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ba 32 1f 41 4d 2b b2 e9 c3 92 de 11 08 00
	[  +1.002921] IPv4: martian source 10.244.0.18 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: ba 32 1f 41 4d 2b b2 e9 c3 92 de 11 08 00
	[  +2.019793] IPv4: martian source 10.244.0.18 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ba 32 1f 41 4d 2b b2 e9 c3 92 de 11 08 00
	[  +4.187617] IPv4: martian source 10.244.0.18 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ba 32 1f 41 4d 2b b2 e9 c3 92 de 11 08 00
	[  +8.191155] IPv4: martian source 10.244.0.18 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ba 32 1f 41 4d 2b b2 e9 c3 92 de 11 08 00
	[Oct 5 20:06] IPv4: martian source 10.244.0.18 from 127.0.0.1, on dev eth0
	[  +0.000032] ll header: 00000000: ba 32 1f 41 4d 2b b2 e9 c3 92 de 11 08 00
	[ +32.508838] IPv4: martian source 10.244.0.18 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ba 32 1f 41 4d 2b b2 e9 c3 92 de 11 08 00
	
	* 
	* ==> etcd [4fdf46250be45cc39c0ec055b7120a2006454f9908982a8620cf79064000b07c] <==
	* {"level":"info","ts":"2023-10-05T20:04:08.330609Z","caller":"traceutil/trace.go:171","msg":"trace[2067290818] linearizableReadLoop","detail":"{readStateIndex:419; appliedIndex:418; }","duration":"106.237672ms","start":"2023-10-05T20:04:08.224349Z","end":"2023-10-05T20:04:08.330587Z","steps":["trace[2067290818] 'read index received'  (duration: 95.374553ms)","trace[2067290818] 'applied index is now lower than readState.Index'  (duration: 10.861247ms)"],"step_count":2}
	{"level":"info","ts":"2023-10-05T20:04:08.33078Z","caller":"traceutil/trace.go:171","msg":"trace[1107452090] transaction","detail":"{read_only:false; response_revision:403; number_of_response:1; }","duration":"287.193585ms","start":"2023-10-05T20:04:08.043563Z","end":"2023-10-05T20:04:08.330756Z","steps":["trace[1107452090] 'process raft request'  (duration: 195.076109ms)","trace[1107452090] 'compare'  (duration: 91.414554ms)"],"step_count":2}
	{"level":"warn","ts":"2023-10-05T20:04:08.33083Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.474948ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-10-05T20:04:08.331981Z","caller":"traceutil/trace.go:171","msg":"trace[279926715] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:404; }","duration":"107.648229ms","start":"2023-10-05T20:04:08.224317Z","end":"2023-10-05T20:04:08.331965Z","steps":["trace[279926715] 'agreement among raft nodes before linearized reading'  (duration: 106.443004ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-05T20:04:08.825185Z","caller":"traceutil/trace.go:171","msg":"trace[2081577414] transaction","detail":"{read_only:false; response_revision:423; number_of_response:1; }","duration":"103.229945ms","start":"2023-10-05T20:04:08.721936Z","end":"2023-10-05T20:04:08.825166Z","steps":["trace[2081577414] 'process raft request'  (duration: 98.715204ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-05T20:04:08.825747Z","caller":"traceutil/trace.go:171","msg":"trace[2142426195] linearizableReadLoop","detail":"{readStateIndex:439; appliedIndex:437; }","duration":"103.448237ms","start":"2023-10-05T20:04:08.722275Z","end":"2023-10-05T20:04:08.825724Z","steps":["trace[2142426195] 'read index received'  (duration: 9.986477ms)","trace[2142426195] 'applied index is now lower than readState.Index'  (duration: 93.460131ms)"],"step_count":2}
	{"level":"warn","ts":"2023-10-05T20:04:08.825893Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.948666ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/local-path-provisioner-role\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-10-05T20:04:08.826966Z","caller":"traceutil/trace.go:171","msg":"trace[1349290120] range","detail":"{range_begin:/registry/clusterroles/local-path-provisioner-role; range_end:; response_count:0; response_revision:425; }","duration":"106.05698ms","start":"2023-10-05T20:04:08.720891Z","end":"2023-10-05T20:04:08.826948Z","steps":["trace[1349290120] 'agreement among raft nodes before linearized reading'  (duration: 104.92031ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-05T20:04:09.53629Z","caller":"traceutil/trace.go:171","msg":"trace[179674198] linearizableReadLoop","detail":"{readStateIndex:481; appliedIndex:481; }","duration":"104.592399ms","start":"2023-10-05T20:04:09.431515Z","end":"2023-10-05T20:04:09.536108Z","steps":["trace[179674198] 'read index received'  (duration: 104.587141ms)","trace[179674198] 'applied index is now lower than readState.Index'  (duration: 4.163µs)"],"step_count":2}
	{"level":"warn","ts":"2023-10-05T20:04:09.539569Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.074556ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/kube-system/tiller-deploy-7b677967b9\" ","response":"range_response_count:1 size:2960"}
	{"level":"info","ts":"2023-10-05T20:04:09.539717Z","caller":"traceutil/trace.go:171","msg":"trace[657632898] range","detail":"{range_begin:/registry/replicasets/kube-system/tiller-deploy-7b677967b9; range_end:; response_count:1; response_revision:464; }","duration":"108.227251ms","start":"2023-10-05T20:04:09.431464Z","end":"2023-10-05T20:04:09.539691Z","steps":["trace[657632898] 'agreement among raft nodes before linearized reading'  (duration: 104.682072ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-05T20:04:09.620346Z","caller":"traceutil/trace.go:171","msg":"trace[1346003377] transaction","detail":"{read_only:false; response_revision:465; number_of_response:1; }","duration":"200.346473ms","start":"2023-10-05T20:04:09.419957Z","end":"2023-10-05T20:04:09.620304Z","steps":["trace[1346003377] 'process raft request'  (duration: 200.06172ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-05T20:05:31.42402Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.282915ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128024265794132816 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/registry-test.178b4ecdfc17fceb\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/registry-test.178b4ecdfc17fceb\" value_size:639 lease:8128024265794131997 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2023-10-05T20:05:31.424121Z","caller":"traceutil/trace.go:171","msg":"trace[1825062001] linearizableReadLoop","detail":"{readStateIndex:1250; appliedIndex:1248; }","duration":"174.597445ms","start":"2023-10-05T20:05:31.24951Z","end":"2023-10-05T20:05:31.424107Z","steps":["trace[1825062001] 'read index received'  (duration: 16.250487ms)","trace[1825062001] 'applied index is now lower than readState.Index'  (duration: 158.345603ms)"],"step_count":2}
	{"level":"info","ts":"2023-10-05T20:05:31.424152Z","caller":"traceutil/trace.go:171","msg":"trace[1386512089] transaction","detail":"{read_only:false; response_revision:1210; number_of_response:1; }","duration":"181.677631ms","start":"2023-10-05T20:05:31.242445Z","end":"2023-10-05T20:05:31.424122Z","steps":["trace[1386512089] 'process raft request'  (duration: 77.795307ms)","trace[1386512089] 'compare'  (duration: 103.157483ms)"],"step_count":2}
	{"level":"warn","ts":"2023-10-05T20:05:31.424311Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"154.977254ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/local-path-storage/helper-pod-delete-pvc-4b90cbef-9395-48c7-bf53-d29fd7509af3\" ","response":"range_response_count:1 size:2886"}
	{"level":"info","ts":"2023-10-05T20:05:31.424347Z","caller":"traceutil/trace.go:171","msg":"trace[1593865824] range","detail":"{range_begin:/registry/pods/local-path-storage/helper-pod-delete-pvc-4b90cbef-9395-48c7-bf53-d29fd7509af3; range_end:; response_count:1; response_revision:1210; }","duration":"155.020298ms","start":"2023-10-05T20:05:31.269316Z","end":"2023-10-05T20:05:31.424336Z","steps":["trace[1593865824] 'agreement among raft nodes before linearized reading'  (duration: 154.87377ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-05T20:05:31.42434Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"154.209494ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/test-local-path\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-10-05T20:05:31.424476Z","caller":"traceutil/trace.go:171","msg":"trace[1112653094] range","detail":"{range_begin:/registry/pods/default/test-local-path; range_end:; response_count:0; response_revision:1210; }","duration":"154.351534ms","start":"2023-10-05T20:05:31.270112Z","end":"2023-10-05T20:05:31.424464Z","steps":["trace[1112653094] 'agreement among raft nodes before linearized reading'  (duration: 154.117116ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-05T20:05:31.424496Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"175.012949ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:552"}
	{"level":"info","ts":"2023-10-05T20:05:31.42453Z","caller":"traceutil/trace.go:171","msg":"trace[1662382139] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1210; }","duration":"175.045347ms","start":"2023-10-05T20:05:31.249471Z","end":"2023-10-05T20:05:31.424517Z","steps":["trace[1662382139] 'agreement among raft nodes before linearized reading'  (duration: 174.694676ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-05T20:05:31.592029Z","caller":"traceutil/trace.go:171","msg":"trace[483911106] linearizableReadLoop","detail":"{readStateIndex:1252; appliedIndex:1251; }","duration":"101.566731ms","start":"2023-10-05T20:05:31.490441Z","end":"2023-10-05T20:05:31.592008Z","steps":["trace[483911106] 'read index received'  (duration: 78.721542ms)","trace[483911106] 'applied index is now lower than readState.Index'  (duration: 22.844066ms)"],"step_count":2}
	{"level":"info","ts":"2023-10-05T20:05:31.592127Z","caller":"traceutil/trace.go:171","msg":"trace[846092698] transaction","detail":"{read_only:false; response_revision:1212; number_of_response:1; }","duration":"159.335246ms","start":"2023-10-05T20:05:31.432766Z","end":"2023-10-05T20:05:31.592102Z","steps":["trace[846092698] 'process raft request'  (duration: 136.49134ms)","trace[846092698] 'compare'  (duration: 22.624938ms)"],"step_count":2}
	{"level":"warn","ts":"2023-10-05T20:05:31.592207Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.751184ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets/gadget/gcp-auth\" ","response":"range_response_count:1 size:4437"}
	{"level":"info","ts":"2023-10-05T20:05:31.592261Z","caller":"traceutil/trace.go:171","msg":"trace[557174766] range","detail":"{range_begin:/registry/secrets/gadget/gcp-auth; range_end:; response_count:1; response_revision:1212; }","duration":"101.830941ms","start":"2023-10-05T20:05:31.490413Z","end":"2023-10-05T20:05:31.592244Z","steps":["trace[557174766] 'agreement among raft nodes before linearized reading'  (duration: 101.693769ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [77c66159354dca4833aa3178ab5fc168d8ff18b541b5084eb006cd25bd81a1de] <==
	* 2023/10/05 20:05:13 GCP Auth Webhook started!
	2023/10/05 20:05:21 Ready to marshal response ...
	2023/10/05 20:05:21 Ready to write response ...
	2023/10/05 20:05:21 Ready to marshal response ...
	2023/10/05 20:05:21 Ready to write response ...
	2023/10/05 20:05:22 Ready to marshal response ...
	2023/10/05 20:05:22 Ready to write response ...
	2023/10/05 20:05:30 Ready to marshal response ...
	2023/10/05 20:05:30 Ready to write response ...
	2023/10/05 20:05:31 Ready to marshal response ...
	2023/10/05 20:05:31 Ready to write response ...
	2023/10/05 20:05:35 Ready to marshal response ...
	2023/10/05 20:05:35 Ready to write response ...
	2023/10/05 20:05:35 Ready to marshal response ...
	2023/10/05 20:05:35 Ready to write response ...
	2023/10/05 20:05:35 Ready to marshal response ...
	2023/10/05 20:05:35 Ready to write response ...
	2023/10/05 20:05:43 Ready to marshal response ...
	2023/10/05 20:05:43 Ready to write response ...
	2023/10/05 20:06:14 Ready to marshal response ...
	2023/10/05 20:06:14 Ready to write response ...
	2023/10/05 20:06:46 Ready to marshal response ...
	2023/10/05 20:06:46 Ready to write response ...
	2023/10/05 20:07:43 Ready to marshal response ...
	2023/10/05 20:07:43 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  20:07:54 up  1:50,  0 users,  load average: 0.74, 0.85, 0.73
	Linux addons-029116 5.15.0-1044-gcp #52~20.04.1-Ubuntu SMP Wed Sep 20 16:25:19 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [1b05525918bca7b4e5ba14509781e203c741d89702c88df4b04616c320294b22] <==
	* I1005 20:05:45.035724       1 main.go:227] handling current node
	I1005 20:05:55.040154       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 20:05:55.040184       1 main.go:227] handling current node
	I1005 20:06:05.051415       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 20:06:05.051444       1 main.go:227] handling current node
	I1005 20:06:15.056146       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 20:06:15.056173       1 main.go:227] handling current node
	I1005 20:06:25.069612       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 20:06:25.069667       1 main.go:227] handling current node
	I1005 20:06:35.074409       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 20:06:35.074450       1 main.go:227] handling current node
	I1005 20:06:45.086947       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 20:06:45.086981       1 main.go:227] handling current node
	I1005 20:06:55.092227       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 20:06:55.092260       1 main.go:227] handling current node
	I1005 20:07:05.102768       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 20:07:05.102797       1 main.go:227] handling current node
	I1005 20:07:15.107542       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 20:07:15.107569       1 main.go:227] handling current node
	I1005 20:07:25.119014       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 20:07:25.119042       1 main.go:227] handling current node
	I1005 20:07:35.123464       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 20:07:35.123490       1 main.go:227] handling current node
	I1005 20:07:45.130797       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 20:07:45.130827       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [5f7b78b6e6e33b46d7955aa33533831aa7c215cfedba09670222db8b9975de3d] <==
	* E1005 20:05:46.239721       1 upgradeaware.go:425] Error proxying data from client to backend: read tcp 192.168.49.2:8443->10.244.0.26:55408: read: connection reset by peer
	I1005 20:05:48.325648       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1005 20:06:26.710090       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1005 20:06:46.913119       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1005 20:07:02.391981       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1005 20:07:02.392030       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1005 20:07:02.399405       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1005 20:07:02.399602       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1005 20:07:02.407584       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1005 20:07:02.407642       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1005 20:07:02.408766       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1005 20:07:02.408815       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1005 20:07:02.418308       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1005 20:07:02.418378       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1005 20:07:02.423903       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1005 20:07:02.423958       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1005 20:07:02.439628       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1005 20:07:02.439840       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1005 20:07:02.442726       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1005 20:07:02.442933       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1005 20:07:03.409299       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1005 20:07:03.443228       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1005 20:07:03.528212       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1005 20:07:43.793859       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.108.140.37"}
	E1005 20:07:46.395865       1 authentication.go:70] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	* 
	* ==> kube-controller-manager [350933d7ea4109c1201fc015e339a3f15df2aa3caf6abba6e492814e3370f3d3] <==
	* W1005 20:07:20.304614       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1005 20:07:20.304648       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1005 20:07:22.976114       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1005 20:07:22.976157       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1005 20:07:23.771395       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1005 20:07:23.771439       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1005 20:07:24.858539       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1005 20:07:24.858577       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1005 20:07:39.694178       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1005 20:07:39.694215       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1005 20:07:40.311729       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1005 20:07:40.311774       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1005 20:07:43.621976       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I1005 20:07:43.633512       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-sbjd2"
	I1005 20:07:43.639789       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="17.994862ms"
	I1005 20:07:43.645923       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="5.973247ms"
	I1005 20:07:43.646045       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="66.007µs"
	I1005 20:07:43.652704       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="84.083µs"
	I1005 20:07:45.748960       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="7.72751ms"
	I1005 20:07:45.749067       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="63.24µs"
	I1005 20:07:46.328115       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I1005 20:07:46.330127       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-5c4c674fdc" duration="6.185µs"
	I1005 20:07:46.333327       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	W1005 20:07:49.786943       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1005 20:07:49.786980       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [4f04c8be271d2caaf6323be99b2cf632cdca1d94795e6e82799d4f43edbfbe79] <==
	* I1005 20:04:04.465247       1 server_others.go:69] "Using iptables proxy"
	I1005 20:04:04.533606       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1005 20:04:05.139346       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1005 20:04:05.327723       1 server_others.go:152] "Using iptables Proxier"
	I1005 20:04:05.327888       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1005 20:04:05.327930       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1005 20:04:05.328045       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1005 20:04:05.328347       1 server.go:846] "Version info" version="v1.28.2"
	I1005 20:04:05.328886       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1005 20:04:05.330069       1 config.go:188] "Starting service config controller"
	I1005 20:04:05.432184       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1005 20:04:05.432288       1 shared_informer.go:318] Caches are synced for service config
	I1005 20:04:05.331401       1 config.go:97] "Starting endpoint slice config controller"
	I1005 20:04:05.432350       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1005 20:04:05.432374       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1005 20:04:05.331915       1 config.go:315] "Starting node config controller"
	I1005 20:04:05.432489       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1005 20:04:05.432512       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [37a4f09ac77c02bf4d6df7dca5647ee77b30f7329e7351c95b7e6dacfe1fe111] <==
	* W1005 20:03:48.541095       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1005 20:03:48.541121       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1005 20:03:48.541165       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1005 20:03:48.541183       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1005 20:03:48.541272       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1005 20:03:48.541293       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1005 20:03:48.541529       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1005 20:03:48.541557       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1005 20:03:48.541610       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1005 20:03:48.541624       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1005 20:03:48.541636       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1005 20:03:48.541645       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1005 20:03:49.477627       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1005 20:03:49.477780       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1005 20:03:49.496650       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1005 20:03:49.496693       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1005 20:03:49.546959       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1005 20:03:49.546993       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1005 20:03:49.578526       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1005 20:03:49.578565       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1005 20:03:49.589965       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1005 20:03:49.590015       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1005 20:03:49.664656       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1005 20:03:49.664703       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I1005 20:03:50.035848       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Oct 05 20:07:45 addons-029116 kubelet[1557]: I1005 20:07:45.731157    1557 scope.go:117] "RemoveContainer" containerID="a1e9bbff9f21879c47229763700bbc97a1cbb6da3454d27132c093bd345c7e24"
	Oct 05 20:07:45 addons-029116 kubelet[1557]: I1005 20:07:45.742015    1557 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/hello-world-app-5d77478584-sbjd2" podStartSLOduration=2.018838594 podCreationTimestamp="2023-10-05 20:07:43 +0000 UTC" firstStartedPulling="2023-10-05 20:07:44.124032543 +0000 UTC m=+232.753811886" lastFinishedPulling="2023-10-05 20:07:44.847149252 +0000 UTC m=+233.476928593" observedRunningTime="2023-10-05 20:07:45.741140323 +0000 UTC m=+234.370919676" watchObservedRunningTime="2023-10-05 20:07:45.741955301 +0000 UTC m=+234.371734651"
	Oct 05 20:07:45 addons-029116 kubelet[1557]: I1005 20:07:45.751230    1557 scope.go:117] "RemoveContainer" containerID="a1e9bbff9f21879c47229763700bbc97a1cbb6da3454d27132c093bd345c7e24"
	Oct 05 20:07:45 addons-029116 kubelet[1557]: E1005 20:07:45.751800    1557 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1e9bbff9f21879c47229763700bbc97a1cbb6da3454d27132c093bd345c7e24\": container with ID starting with a1e9bbff9f21879c47229763700bbc97a1cbb6da3454d27132c093bd345c7e24 not found: ID does not exist" containerID="a1e9bbff9f21879c47229763700bbc97a1cbb6da3454d27132c093bd345c7e24"
	Oct 05 20:07:45 addons-029116 kubelet[1557]: I1005 20:07:45.751868    1557 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1e9bbff9f21879c47229763700bbc97a1cbb6da3454d27132c093bd345c7e24"} err="failed to get container status \"a1e9bbff9f21879c47229763700bbc97a1cbb6da3454d27132c093bd345c7e24\": rpc error: code = NotFound desc = could not find container \"a1e9bbff9f21879c47229763700bbc97a1cbb6da3454d27132c093bd345c7e24\": container with ID starting with a1e9bbff9f21879c47229763700bbc97a1cbb6da3454d27132c093bd345c7e24 not found: ID does not exist"
	Oct 05 20:07:47 addons-029116 kubelet[1557]: I1005 20:07:47.532430    1557 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="210e60cb-c9cb-4971-b48e-368985be2277" path="/var/lib/kubelet/pods/210e60cb-c9cb-4971-b48e-368985be2277/volumes"
	Oct 05 20:07:47 addons-029116 kubelet[1557]: I1005 20:07:47.532784    1557 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="81e38311-d422-4260-9cff-9ba9633b6215" path="/var/lib/kubelet/pods/81e38311-d422-4260-9cff-9ba9633b6215/volumes"
	Oct 05 20:07:47 addons-029116 kubelet[1557]: I1005 20:07:47.533072    1557 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="a9ceb821-50a1-4974-8fcb-fc084ab97227" path="/var/lib/kubelet/pods/a9ceb821-50a1-4974-8fcb-fc084ab97227/volumes"
	Oct 05 20:07:49 addons-029116 kubelet[1557]: I1005 20:07:49.742439    1557 scope.go:117] "RemoveContainer" containerID="41c7c038b90d4a5f789a53fbd1e7c766b8be09d8f62bc2c8b241622f0cbb7ec6"
	Oct 05 20:07:49 addons-029116 kubelet[1557]: I1005 20:07:49.760016    1557 scope.go:117] "RemoveContainer" containerID="41c7c038b90d4a5f789a53fbd1e7c766b8be09d8f62bc2c8b241622f0cbb7ec6"
	Oct 05 20:07:49 addons-029116 kubelet[1557]: E1005 20:07:49.760484    1557 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41c7c038b90d4a5f789a53fbd1e7c766b8be09d8f62bc2c8b241622f0cbb7ec6\": container with ID starting with 41c7c038b90d4a5f789a53fbd1e7c766b8be09d8f62bc2c8b241622f0cbb7ec6 not found: ID does not exist" containerID="41c7c038b90d4a5f789a53fbd1e7c766b8be09d8f62bc2c8b241622f0cbb7ec6"
	Oct 05 20:07:49 addons-029116 kubelet[1557]: I1005 20:07:49.760525    1557 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41c7c038b90d4a5f789a53fbd1e7c766b8be09d8f62bc2c8b241622f0cbb7ec6"} err="failed to get container status \"41c7c038b90d4a5f789a53fbd1e7c766b8be09d8f62bc2c8b241622f0cbb7ec6\": rpc error: code = NotFound desc = could not find container \"41c7c038b90d4a5f789a53fbd1e7c766b8be09d8f62bc2c8b241622f0cbb7ec6\": container with ID starting with 41c7c038b90d4a5f789a53fbd1e7c766b8be09d8f62bc2c8b241622f0cbb7ec6 not found: ID does not exist"
	Oct 05 20:07:49 addons-029116 kubelet[1557]: I1005 20:07:49.763817    1557 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d04e5438-a507-4e88-ab77-1dec8cef310d-webhook-cert\") pod \"d04e5438-a507-4e88-ab77-1dec8cef310d\" (UID: \"d04e5438-a507-4e88-ab77-1dec8cef310d\") "
	Oct 05 20:07:49 addons-029116 kubelet[1557]: I1005 20:07:49.763871    1557 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sr9bd\" (UniqueName: \"kubernetes.io/projected/d04e5438-a507-4e88-ab77-1dec8cef310d-kube-api-access-sr9bd\") pod \"d04e5438-a507-4e88-ab77-1dec8cef310d\" (UID: \"d04e5438-a507-4e88-ab77-1dec8cef310d\") "
	Oct 05 20:07:49 addons-029116 kubelet[1557]: I1005 20:07:49.765906    1557 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d04e5438-a507-4e88-ab77-1dec8cef310d-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "d04e5438-a507-4e88-ab77-1dec8cef310d" (UID: "d04e5438-a507-4e88-ab77-1dec8cef310d"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 05 20:07:49 addons-029116 kubelet[1557]: I1005 20:07:49.765914    1557 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d04e5438-a507-4e88-ab77-1dec8cef310d-kube-api-access-sr9bd" (OuterVolumeSpecName: "kube-api-access-sr9bd") pod "d04e5438-a507-4e88-ab77-1dec8cef310d" (UID: "d04e5438-a507-4e88-ab77-1dec8cef310d"). InnerVolumeSpecName "kube-api-access-sr9bd". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 05 20:07:49 addons-029116 kubelet[1557]: I1005 20:07:49.864242    1557 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d04e5438-a507-4e88-ab77-1dec8cef310d-webhook-cert\") on node \"addons-029116\" DevicePath \"\""
	Oct 05 20:07:49 addons-029116 kubelet[1557]: I1005 20:07:49.864292    1557 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-sr9bd\" (UniqueName: \"kubernetes.io/projected/d04e5438-a507-4e88-ab77-1dec8cef310d-kube-api-access-sr9bd\") on node \"addons-029116\" DevicePath \"\""
	Oct 05 20:07:51 addons-029116 kubelet[1557]: I1005 20:07:51.531816    1557 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="d04e5438-a507-4e88-ab77-1dec8cef310d" path="/var/lib/kubelet/pods/d04e5438-a507-4e88-ab77-1dec8cef310d/volumes"
	Oct 05 20:07:51 addons-029116 kubelet[1557]: E1005 20:07:51.661167    1557 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/320cb80f01d13c77935c8a00bc58b10aab9513edb0d80580155b9b50429061e5/diff" to get inode usage: stat /var/lib/containers/storage/overlay/320cb80f01d13c77935c8a00bc58b10aab9513edb0d80580155b9b50429061e5/diff: no such file or directory, extraDiskErr: <nil>
	Oct 05 20:07:51 addons-029116 kubelet[1557]: E1005 20:07:51.663425    1557 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/320cb80f01d13c77935c8a00bc58b10aab9513edb0d80580155b9b50429061e5/diff" to get inode usage: stat /var/lib/containers/storage/overlay/320cb80f01d13c77935c8a00bc58b10aab9513edb0d80580155b9b50429061e5/diff: no such file or directory, extraDiskErr: <nil>
	Oct 05 20:07:51 addons-029116 kubelet[1557]: E1005 20:07:51.829629    1557 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/c53ef73d6ea2ecde2fa0a7c1924931917b3721bc77a82aabded52bd887216283/diff" to get inode usage: stat /var/lib/containers/storage/overlay/c53ef73d6ea2ecde2fa0a7c1924931917b3721bc77a82aabded52bd887216283/diff: no such file or directory, extraDiskErr: <nil>
	Oct 05 20:07:51 addons-029116 kubelet[1557]: E1005 20:07:51.946992    1557 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/c53ef73d6ea2ecde2fa0a7c1924931917b3721bc77a82aabded52bd887216283/diff" to get inode usage: stat /var/lib/containers/storage/overlay/c53ef73d6ea2ecde2fa0a7c1924931917b3721bc77a82aabded52bd887216283/diff: no such file or directory, extraDiskErr: <nil>
	Oct 05 20:07:52 addons-029116 kubelet[1557]: I1005 20:07:52.530008    1557 scope.go:117] "RemoveContainer" containerID="8b07d913f3827b091d2af43dcbce1cdc95c8ac749b853dd0a6aee54babde2e46"
	Oct 05 20:07:52 addons-029116 kubelet[1557]: I1005 20:07:52.549011    1557 scope.go:117] "RemoveContainer" containerID="9dfe0e525f171a00139f2c4a0db206083e58ce31cf002cc76d9dc48c3f0a190a"
	
	* 
	* ==> storage-provisioner [650c4b293c0e7b0246d75cd3c3e38a3e6cdfe6475ae6413a8aaa802ec5121577] <==
	* I1005 20:04:37.125288       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1005 20:04:37.141405       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1005 20:04:37.141494       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1005 20:04:37.222506       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1005 20:04:37.222700       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-029116_b65f11ee-3cf0-41dd-aa3e-e160ac35b029!
	I1005 20:04:37.222706       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"348d621e-506f-4f8f-8b4d-7a625c192fc8", APIVersion:"v1", ResourceVersion:"851", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-029116_b65f11ee-3cf0-41dd-aa3e-e160ac35b029 became leader
	I1005 20:04:37.323016       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-029116_b65f11ee-3cf0-41dd-aa3e-e160ac35b029!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-029116 -n addons-029116
helpers_test.go:261: (dbg) Run:  kubectl --context addons-029116 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (154.98s)
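The curl step is the only failing assertion in this test: minikube ssh propagates the remote command's exit code, and 28 is curl's operation-timeout status, so the request reached the node but no HTTP response came back within the deadline. A minimal sketch for replaying the probe by hand (assuming the addons-029116 profile is still running; --max-time, -o, and -w are standard curl options added here only to make triage faster):

	# Replay the exact probe the test performs, with an explicit timeout
	out/minikube-linux-amd64 -p addons-029116 ssh \
	  "curl -s --max-time 30 -o /dev/null -w '%{http_code}\n' http://127.0.0.1/ -H 'Host: nginx.example.com'"
	# 200 means ingress-nginx routed the request; another curl exit 28 means the
	# controller is still not answering on the node's port 80, as in this run.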

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (11.73s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-368978
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 image load --daemon gcr.io/google-containers/addon-resizer:functional-368978 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-368978 image load --daemon gcr.io/google-containers/addon-resizer:functional-368978 --alsologtostderr: (8.551990185s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 image ls
functional_test.go:447: (dbg) Done: out/minikube-linux-amd64 -p functional-368978 image ls: (2.267300564s)
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-368978" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (11.73s)
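The sequence above fails only at its final assertion: after image load --daemon, the profile-scoped tag must appear in the runtime's image list. A minimal sketch of the same round trip for manual reproduction (every command is one the test itself runs, plus a grep over the listing):

	# Tag the upstream image with the profile-scoped name and load it into minikube
	docker pull gcr.io/google-containers/addon-resizer:1.8.9
	docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-368978
	out/minikube-linux-amd64 -p functional-368978 image load --daemon gcr.io/google-containers/addon-resizer:functional-368978
	# The test fails exactly when this grep comes back empty
	out/minikube-linux-amd64 -p functional-368978 image ls | grep addon-resizer:functional-368978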

TestIngressAddonLegacy/serial/ValidateIngressAddons (185.02s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:205: (dbg) Run:  kubectl --context ingress-addon-legacy-540731 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:205: (dbg) Done: kubectl --context ingress-addon-legacy-540731 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (15.972878841s)
addons_test.go:230: (dbg) Run:  kubectl --context ingress-addon-legacy-540731 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context ingress-addon-legacy-540731 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [f0f11113-cd56-424f-a79a-2fd0883891b9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [f0f11113-cd56-424f-a79a-2fd0883891b9] Running
addons_test.go:248: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.009219751s
addons_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-540731 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E1005 20:15:20.648359  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/addons-029116/client.crt: no such file or directory
E1005 20:15:48.332048  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/addons-029116/client.crt: no such file or directory
addons_test.go:260: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-540731 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.274247492s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:276: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
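Before replaying the curl against this profile, it helps to separate a controller failure from a routing failure. A short sketch using only the context and label selector already shown in this transcript (assuming the profile is still up; no other names are introduced):

	# Is the legacy controller pod still Ready after the addon was enabled?
	kubectl --context ingress-addon-legacy-540731 get pods -n ingress-nginx \
	  -l app.kubernetes.io/component=controller -o wide
	# Did the Ingress object from testdata/nginx-ingress-v1beta1.yaml get an address?
	kubectl --context ingress-addon-legacy-540731 get ingress -A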
addons_test.go:284: (dbg) Run:  kubectl --context ingress-addon-legacy-540731 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-540731 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.49.2
E1005 20:16:42.403144  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/functional-368978/client.crt: no such file or directory
E1005 20:16:42.408514  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/functional-368978/client.crt: no such file or directory
E1005 20:16:42.418863  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/functional-368978/client.crt: no such file or directory
E1005 20:16:42.439262  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/functional-368978/client.crt: no such file or directory
E1005 20:16:42.479562  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/functional-368978/client.crt: no such file or directory
E1005 20:16:42.560144  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/functional-368978/client.crt: no such file or directory
E1005 20:16:42.720593  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/functional-368978/client.crt: no such file or directory
E1005 20:16:43.041210  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/functional-368978/client.crt: no such file or directory
E1005 20:16:43.682184  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/functional-368978/client.crt: no such file or directory
addons_test.go:295: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.010494029s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:297: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:301: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached
stderr: 
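"connection timed out; no servers could be reached" means the ingress-dns responder on the node IP never answered the UDP/53 query at all, rather than answering with NXDOMAIN. A minimal sketch for probing it directly (the record name and server come from the test above; dig's standard +time/+tries flags are added only so a dead responder fails fast):

	# Query the minikube node IP for the record created by ingress-dns-example-v1beta1.yaml
	nslookup hello-john.test 192.168.49.2
	# Same check with tight timeouts so a dead responder fails in about 2s
	dig +time=2 +tries=1 @192.168.49.2 hello-john.test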
addons_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-540731 addons disable ingress-dns --alsologtostderr -v=1
E1005 20:16:44.962626  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/functional-368978/client.crt: no such file or directory
addons_test.go:304: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-540731 addons disable ingress-dns --alsologtostderr -v=1: (2.424179798s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-540731 addons disable ingress --alsologtostderr -v=1
E1005 20:16:47.523687  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/functional-368978/client.crt: no such file or directory
E1005 20:16:52.644915  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/functional-368978/client.crt: no such file or directory
addons_test.go:309: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-540731 addons disable ingress --alsologtostderr -v=1: (7.478554889s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-540731
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-540731:

-- stdout --
	[
	    {
	        "Id": "21228f7bce93d8130112368e459036b5790c0f1e3167f9afb664e2a20921fa60",
	        "Created": "2023-10-05T20:12:48.341549575Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 381065,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-05T20:12:48.67083757Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:94671ba3754e2c6976414eaf20a0c7861a5d2f9fc631e1161e8ab0ded9062c52",
	        "ResolvConfPath": "/var/lib/docker/containers/21228f7bce93d8130112368e459036b5790c0f1e3167f9afb664e2a20921fa60/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/21228f7bce93d8130112368e459036b5790c0f1e3167f9afb664e2a20921fa60/hostname",
	        "HostsPath": "/var/lib/docker/containers/21228f7bce93d8130112368e459036b5790c0f1e3167f9afb664e2a20921fa60/hosts",
	        "LogPath": "/var/lib/docker/containers/21228f7bce93d8130112368e459036b5790c0f1e3167f9afb664e2a20921fa60/21228f7bce93d8130112368e459036b5790c0f1e3167f9afb664e2a20921fa60-json.log",
	        "Name": "/ingress-addon-legacy-540731",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-540731:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ingress-addon-legacy-540731",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c0813d2244862ca3c24adbd6b8a8f4b78ab0084f9c3902b0451822b9c6a6360f-init/diff:/var/lib/docker/overlay2/a21dd10b1c0943795b4df336c5f708b264590966562c18c6ecb8b8c4ccc3838e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c0813d2244862ca3c24adbd6b8a8f4b78ab0084f9c3902b0451822b9c6a6360f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c0813d2244862ca3c24adbd6b8a8f4b78ab0084f9c3902b0451822b9c6a6360f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c0813d2244862ca3c24adbd6b8a8f4b78ab0084f9c3902b0451822b9c6a6360f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-540731",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-540731/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-540731",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-540731",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-540731",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7c52a084651bbefc66602e7679644a3d4422e89f02fcbd4fa309c4379414bc88",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/7c52a084651b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-540731": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "21228f7bce93",
	                        "ingress-addon-legacy-540731"
	                    ],
	                    "NetworkID": "d821627342eb64bf4f50f35dadaee4a19639b81a40411c9d2c827f1a7b008313",
	                    "EndpointID": "18b6b6a1e4f1782e90315f64103d5c9cc05a57d1ec4f406814b7e53fc37a6dd6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-540731 -n ingress-addon-legacy-540731
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-540731 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-540731 logs -n 25: (1.164331265s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                     Args                                     |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| update-context | functional-368978                                                            | functional-368978           | jenkins | v1.31.2 | 05 Oct 23 20:12 UTC | 05 Oct 23 20:12 UTC |
	|                | update-context                                                               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                       |                             |         |         |                     |                     |
	| update-context | functional-368978                                                            | functional-368978           | jenkins | v1.31.2 | 05 Oct 23 20:12 UTC | 05 Oct 23 20:12 UTC |
	|                | update-context                                                               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                       |                             |         |         |                     |                     |
	| update-context | functional-368978                                                            | functional-368978           | jenkins | v1.31.2 | 05 Oct 23 20:12 UTC | 05 Oct 23 20:12 UTC |
	|                | update-context                                                               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                       |                             |         |         |                     |                     |
	| image          | functional-368978 image ls                                                   | functional-368978           | jenkins | v1.31.2 | 05 Oct 23 20:12 UTC | 05 Oct 23 20:12 UTC |
	| image          | functional-368978 image save                                                 | functional-368978           | jenkins | v1.31.2 | 05 Oct 23 20:12 UTC | 05 Oct 23 20:12 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-368978                     |                             |         |         |                     |                     |
	|                | /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image          | functional-368978 image rm                                                   | functional-368978           | jenkins | v1.31.2 | 05 Oct 23 20:12 UTC | 05 Oct 23 20:12 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-368978                     |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image          | functional-368978 image ls                                                   | functional-368978           | jenkins | v1.31.2 | 05 Oct 23 20:12 UTC | 05 Oct 23 20:12 UTC |
	| image          | functional-368978 image load                                                 | functional-368978           | jenkins | v1.31.2 | 05 Oct 23 20:12 UTC | 05 Oct 23 20:12 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image          | functional-368978 image ls                                                   | functional-368978           | jenkins | v1.31.2 | 05 Oct 23 20:12 UTC | 05 Oct 23 20:12 UTC |
	| image          | functional-368978 image save --daemon                                        | functional-368978           | jenkins | v1.31.2 | 05 Oct 23 20:12 UTC | 05 Oct 23 20:12 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-368978                     |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image          | functional-368978                                                            | functional-368978           | jenkins | v1.31.2 | 05 Oct 23 20:12 UTC | 05 Oct 23 20:12 UTC |
	|                | image ls --format short                                                      |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image          | functional-368978                                                            | functional-368978           | jenkins | v1.31.2 | 05 Oct 23 20:12 UTC | 05 Oct 23 20:12 UTC |
	|                | image ls --format json                                                       |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image          | functional-368978                                                            | functional-368978           | jenkins | v1.31.2 | 05 Oct 23 20:12 UTC | 05 Oct 23 20:12 UTC |
	|                | image ls --format yaml                                                       |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| ssh            | functional-368978 ssh pgrep                                                  | functional-368978           | jenkins | v1.31.2 | 05 Oct 23 20:12 UTC |                     |
	|                | buildkitd                                                                    |                             |         |         |                     |                     |
	| image          | functional-368978                                                            | functional-368978           | jenkins | v1.31.2 | 05 Oct 23 20:12 UTC | 05 Oct 23 20:12 UTC |
	|                | image ls --format table                                                      |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image          | functional-368978 image build -t                                             | functional-368978           | jenkins | v1.31.2 | 05 Oct 23 20:12 UTC | 05 Oct 23 20:12 UTC |
	|                | localhost/my-image:functional-368978                                         |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                             |                             |         |         |                     |                     |
	| image          | functional-368978 image ls                                                   | functional-368978           | jenkins | v1.31.2 | 05 Oct 23 20:12 UTC | 05 Oct 23 20:12 UTC |
	| delete         | -p functional-368978                                                         | functional-368978           | jenkins | v1.31.2 | 05 Oct 23 20:12 UTC | 05 Oct 23 20:12 UTC |
	| start          | -p ingress-addon-legacy-540731                                               | ingress-addon-legacy-540731 | jenkins | v1.31.2 | 05 Oct 23 20:12 UTC | 05 Oct 23 20:13 UTC |
	|                | --kubernetes-version=v1.18.20                                                |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                                                         |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                                     |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-540731                                                  | ingress-addon-legacy-540731 | jenkins | v1.31.2 | 05 Oct 23 20:13 UTC | 05 Oct 23 20:13 UTC |
	|                | addons enable ingress                                                        |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                       |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-540731                                                  | ingress-addon-legacy-540731 | jenkins | v1.31.2 | 05 Oct 23 20:13 UTC | 05 Oct 23 20:13 UTC |
	|                | addons enable ingress-dns                                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                       |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-540731                                                  | ingress-addon-legacy-540731 | jenkins | v1.31.2 | 05 Oct 23 20:14 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/                                                |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                                                 |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-540731 ip                                               | ingress-addon-legacy-540731 | jenkins | v1.31.2 | 05 Oct 23 20:16 UTC | 05 Oct 23 20:16 UTC |
	| addons         | ingress-addon-legacy-540731                                                  | ingress-addon-legacy-540731 | jenkins | v1.31.2 | 05 Oct 23 20:16 UTC | 05 Oct 23 20:16 UTC |
	|                | addons disable ingress-dns                                                   |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                       |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-540731                                                  | ingress-addon-legacy-540731 | jenkins | v1.31.2 | 05 Oct 23 20:16 UTC | 05 Oct 23 20:16 UTC |
	|                | addons disable ingress                                                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                       |                             |         |         |                     |                     |
	|----------------|------------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/05 20:12:34
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1005 20:12:34.919641  380432 out.go:296] Setting OutFile to fd 1 ...
	I1005 20:12:34.919772  380432 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 20:12:34.919781  380432 out.go:309] Setting ErrFile to fd 2...
	I1005 20:12:34.919786  380432 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 20:12:34.919999  380432 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-334135/.minikube/bin
	I1005 20:12:34.920710  380432 out.go:303] Setting JSON to false
	I1005 20:12:34.922474  380432 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6884,"bootTime":1696529871,"procs":284,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1005 20:12:34.922569  380432 start.go:138] virtualization: kvm guest
	I1005 20:12:34.924921  380432 out.go:177] * [ingress-addon-legacy-540731] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1005 20:12:34.926619  380432 notify.go:220] Checking for updates...
	I1005 20:12:34.926637  380432 out.go:177]   - MINIKUBE_LOCATION=17363
	I1005 20:12:34.928340  380432 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1005 20:12:34.929912  380432 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17363-334135/kubeconfig
	I1005 20:12:34.931394  380432 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-334135/.minikube
	I1005 20:12:34.932918  380432 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1005 20:12:34.934421  380432 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1005 20:12:34.936075  380432 driver.go:378] Setting default libvirt URI to qemu:///system
	I1005 20:12:34.962215  380432 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1005 20:12:34.962350  380432 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 20:12:35.025971  380432 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:37 SystemTime:2023-10-05 20:12:35.016759713 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1005 20:12:35.026096  380432 docker.go:294] overlay module found
	I1005 20:12:35.028275  380432 out.go:177] * Using the docker driver based on user configuration
	I1005 20:12:35.029669  380432 start.go:298] selected driver: docker
	I1005 20:12:35.029685  380432 start.go:902] validating driver "docker" against <nil>
	I1005 20:12:35.029707  380432 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1005 20:12:35.030584  380432 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 20:12:35.087013  380432 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:37 SystemTime:2023-10-05 20:12:35.077343448 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1005 20:12:35.087243  380432 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1005 20:12:35.087469  380432 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1005 20:12:35.089574  380432 out.go:177] * Using Docker driver with root privileges
	I1005 20:12:35.091151  380432 cni.go:84] Creating CNI manager for ""
	I1005 20:12:35.091182  380432 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1005 20:12:35.091197  380432 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I1005 20:12:35.091213  380432 start_flags.go:321] config:
	{Name:ingress-addon-legacy-540731 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-540731 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1005 20:12:35.092968  380432 out.go:177] * Starting control plane node ingress-addon-legacy-540731 in cluster ingress-addon-legacy-540731
	I1005 20:12:35.094437  380432 cache.go:122] Beginning downloading kic base image for docker with crio
	I1005 20:12:35.095809  380432 out.go:177] * Pulling base image ...
	I1005 20:12:35.097116  380432 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1005 20:12:35.097150  380432 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon
	I1005 20:12:35.114834  380432 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon, skipping pull
	I1005 20:12:35.114864  380432 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae exists in daemon, skipping load
	I1005 20:12:35.129882  380432 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I1005 20:12:35.129911  380432 cache.go:57] Caching tarball of preloaded images
	I1005 20:12:35.130078  380432 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1005 20:12:35.132122  380432 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1005 20:12:35.133629  380432 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1005 20:12:35.169670  380432 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/17363-334135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I1005 20:12:39.815206  380432 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1005 20:12:39.815309  380432 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17363-334135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1005 20:12:40.840249  380432 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on crio
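
The download above appends a ?checksum=md5:... query parameter, and the save/verify steps that follow compare the tarball's digest against it. As a minimal Go sketch of that verification (not minikube's actual preload.go code; the file name and digest are the ones from this run):

	// checksum_sketch.go - hash the downloaded tarball and compare against
	// the expected MD5 digest from the ?checksum= query parameter.
	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	func verifyMD5(path, expected string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()

		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			return err
		}
		got := hex.EncodeToString(h.Sum(nil))
		if got != expected {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, expected)
		}
		return nil
	}

	func main() {
		// Path and digest taken from the log lines above.
		err := verifyMD5(
			"preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4",
			"0d02e096853189c5b37812b400898e14",
		)
		fmt.Println(err)
	}
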
	I1005 20:12:40.840670  380432 profile.go:148] Saving config to /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731/config.json ...
	I1005 20:12:40.840711  380432 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731/config.json: {Name:mk29387d245ad51a1b4020f91c7be04ad3510c8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:12:40.840959  380432 cache.go:195] Successfully downloaded all kic artifacts
	I1005 20:12:40.840989  380432 start.go:365] acquiring machines lock for ingress-addon-legacy-540731: {Name:mk60c04c0cbd24210f05794a367ec7dc67fdb671 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 20:12:40.841058  380432 start.go:369] acquired machines lock for "ingress-addon-legacy-540731" in 50.445µs
	I1005 20:12:40.841084  380432 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-540731 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-540731 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1005 20:12:40.841191  380432 start.go:125] createHost starting for "" (driver="docker")
	I1005 20:12:40.844726  380432 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1005 20:12:40.845079  380432 start.go:159] libmachine.API.Create for "ingress-addon-legacy-540731" (driver="docker")
	I1005 20:12:40.845129  380432 client.go:168] LocalClient.Create starting
	I1005 20:12:40.845229  380432 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17363-334135/.minikube/certs/ca.pem
	I1005 20:12:40.845277  380432 main.go:141] libmachine: Decoding PEM data...
	I1005 20:12:40.845302  380432 main.go:141] libmachine: Parsing certificate...
	I1005 20:12:40.845368  380432 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17363-334135/.minikube/certs/cert.pem
	I1005 20:12:40.845396  380432 main.go:141] libmachine: Decoding PEM data...
	I1005 20:12:40.845417  380432 main.go:141] libmachine: Parsing certificate...
	I1005 20:12:40.845787  380432 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-540731 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1005 20:12:40.863529  380432 cli_runner.go:211] docker network inspect ingress-addon-legacy-540731 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1005 20:12:40.863627  380432 network_create.go:281] running [docker network inspect ingress-addon-legacy-540731] to gather additional debugging logs...
	I1005 20:12:40.863654  380432 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-540731
	W1005 20:12:40.880612  380432 cli_runner.go:211] docker network inspect ingress-addon-legacy-540731 returned with exit code 1
	I1005 20:12:40.880648  380432 network_create.go:284] error running [docker network inspect ingress-addon-legacy-540731]: docker network inspect ingress-addon-legacy-540731: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-540731 not found
	I1005 20:12:40.880668  380432 network_create.go:286] output of [docker network inspect ingress-addon-legacy-540731]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-540731 not found
	
	** /stderr **
	I1005 20:12:40.880810  380432 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1005 20:12:40.898671  380432 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000e771f0}
	I1005 20:12:40.898713  380432 network_create.go:124] attempt to create docker network ingress-addon-legacy-540731 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1005 20:12:40.898779  380432 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-540731 ingress-addon-legacy-540731
	I1005 20:12:40.957225  380432 network_create.go:108] docker network ingress-addon-legacy-540731 192.168.49.0/24 created
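
The subnet picker above found 192.168.49.0/24 free and then shelled out to docker network create. A hedged os/exec equivalent, trimmed to the essential flags but using the exact name, subnet, gateway, and MTU from this log:

	// network_sketch.go - create the cluster's bridge network; a sketch of
	// the command logged above, not minikube's internal cli_runner.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func createNetwork(name, subnet, gateway string) error {
		cmd := exec.Command("docker", "network", "create",
			"--driver=bridge",
			"--subnet="+subnet,
			"--gateway="+gateway,
			"-o", "com.docker.network.driver.mtu=1500",
			name)
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("network create failed: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		fmt.Println(createNetwork("ingress-addon-legacy-540731", "192.168.49.0/24", "192.168.49.1"))
	}
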
	I1005 20:12:40.957275  380432 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-540731" container
	I1005 20:12:40.957362  380432 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1005 20:12:40.974581  380432 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-540731 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-540731 --label created_by.minikube.sigs.k8s.io=true
	I1005 20:12:40.993885  380432 oci.go:103] Successfully created a docker volume ingress-addon-legacy-540731
	I1005 20:12:40.993986  380432 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-540731-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-540731 --entrypoint /usr/bin/test -v ingress-addon-legacy-540731:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -d /var/lib
	I1005 20:12:42.767276  380432 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-540731-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-540731 --entrypoint /usr/bin/test -v ingress-addon-legacy-540731:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -d /var/lib: (1.773239737s)
	I1005 20:12:42.767315  380432 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-540731
	I1005 20:12:42.767331  380432 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1005 20:12:42.767354  380432 kic.go:190] Starting extracting preloaded images to volume ...
	I1005 20:12:42.767414  380432 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17363-334135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-540731:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir
	I1005 20:12:48.269738  380432 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17363-334135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-540731:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir: (5.502261402s)
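
The extraction step mounts the lz4 tarball read-only, plus the node's /var volume, into a throwaway kicbase container and untars into it. A sketch of the same invocation from Go (the host tarball path is shortened to a placeholder; the volume name and flags match the log):

	// extract_sketch.go - untar the preloaded images into the node volume
	// via a disposable container, as in the docker run logged above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		const image = "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345"
		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", "/path/to/preloaded-images.tar.lz4:/preloaded.tar:ro", // placeholder host path
			"-v", "ingress-addon-legacy-540731:/extractDir",            // the node's /var volume
			image,
			"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		out, err := cmd.CombinedOutput()
		fmt.Println(string(out), err)
	}
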
	I1005 20:12:48.269779  380432 kic.go:199] duration metric: took 5.502420 seconds to extract preloaded images to volume
	W1005 20:12:48.269953  380432 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1005 20:12:48.270080  380432 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1005 20:12:48.325350  380432 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-540731 --name ingress-addon-legacy-540731 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-540731 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-540731 --network ingress-addon-legacy-540731 --ip 192.168.49.2 --volume ingress-addon-legacy-540731:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae
	I1005 20:12:48.679711  380432 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-540731 --format={{.State.Running}}
	I1005 20:12:48.699365  380432 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-540731 --format={{.State.Status}}
	I1005 20:12:48.719227  380432 cli_runner.go:164] Run: docker exec ingress-addon-legacy-540731 stat /var/lib/dpkg/alternatives/iptables
	I1005 20:12:48.775652  380432 oci.go:144] the created container "ingress-addon-legacy-540731" has a running status.
	I1005 20:12:48.775699  380432 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17363-334135/.minikube/machines/ingress-addon-legacy-540731/id_rsa...
	I1005 20:12:48.857057  380432 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-334135/.minikube/machines/ingress-addon-legacy-540731/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1005 20:12:48.857125  380432 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17363-334135/.minikube/machines/ingress-addon-legacy-540731/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1005 20:12:48.878836  380432 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-540731 --format={{.State.Status}}
	I1005 20:12:48.897785  380432 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1005 20:12:48.897809  380432 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-540731 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1005 20:12:48.969756  380432 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-540731 --format={{.State.Status}}
	I1005 20:12:48.989215  380432 machine.go:88] provisioning docker machine ...
	I1005 20:12:48.989254  380432 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-540731"
	I1005 20:12:48.989357  380432 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-540731
	I1005 20:12:49.007910  380432 main.go:141] libmachine: Using SSH client type: native
	I1005 20:12:49.008396  380432 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 127.0.0.1 33089 <nil> <nil>}
	I1005 20:12:49.008420  380432 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-540731 && echo "ingress-addon-legacy-540731" | sudo tee /etc/hostname
	I1005 20:12:49.009104  380432 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52332->127.0.0.1:33089: read: connection reset by peer
	I1005 20:12:52.159341  380432 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-540731
	
	I1005 20:12:52.159440  380432 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-540731
	I1005 20:12:52.176973  380432 main.go:141] libmachine: Using SSH client type: native
	I1005 20:12:52.177322  380432 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 127.0.0.1 33089 <nil> <nil>}
	I1005 20:12:52.177342  380432 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-540731' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-540731/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-540731' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1005 20:12:52.311466  380432 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1005 20:12:52.311499  380432 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17363-334135/.minikube CaCertPath:/home/jenkins/minikube-integration/17363-334135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17363-334135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17363-334135/.minikube}
	I1005 20:12:52.311527  380432 ubuntu.go:177] setting up certificates
	I1005 20:12:52.311544  380432 provision.go:83] configureAuth start
	I1005 20:12:52.311606  380432 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-540731
	I1005 20:12:52.332004  380432 provision.go:138] copyHostCerts
	I1005 20:12:52.332055  380432 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-334135/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17363-334135/.minikube/ca.pem
	I1005 20:12:52.332089  380432 exec_runner.go:144] found /home/jenkins/minikube-integration/17363-334135/.minikube/ca.pem, removing ...
	I1005 20:12:52.332099  380432 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17363-334135/.minikube/ca.pem
	I1005 20:12:52.332178  380432 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-334135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17363-334135/.minikube/ca.pem (1078 bytes)
	I1005 20:12:52.332255  380432 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-334135/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17363-334135/.minikube/cert.pem
	I1005 20:12:52.332273  380432 exec_runner.go:144] found /home/jenkins/minikube-integration/17363-334135/.minikube/cert.pem, removing ...
	I1005 20:12:52.332282  380432 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17363-334135/.minikube/cert.pem
	I1005 20:12:52.332307  380432 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-334135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17363-334135/.minikube/cert.pem (1123 bytes)
	I1005 20:12:52.332350  380432 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-334135/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17363-334135/.minikube/key.pem
	I1005 20:12:52.332366  380432 exec_runner.go:144] found /home/jenkins/minikube-integration/17363-334135/.minikube/key.pem, removing ...
	I1005 20:12:52.332374  380432 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17363-334135/.minikube/key.pem
	I1005 20:12:52.332397  380432 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-334135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17363-334135/.minikube/key.pem (1675 bytes)
	I1005 20:12:52.332441  380432 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17363-334135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17363-334135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17363-334135/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-540731 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-540731]
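
The server cert above carries both IP and DNS SANs so the apiserver endpoint validates from inside and outside the container. A simplified crypto/x509 sketch, self-signed here rather than CA-signed as minikube actually does, with the SAN list and roughly the 26280h (3-year) lifetime from this run:

	// servercert_sketch.go - issue a certificate with the SANs logged above.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ingress-addon-legacy-540731"}},
			// SAN list copied from the provision.go line above.
			IPAddresses: []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
			DNSNames:    []string{"localhost", "minikube", "ingress-addon-legacy-540731"},
			NotBefore:   time.Now(),
			NotAfter:    time.Now().AddDate(3, 0, 0), // ~26280h, per the cluster config
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		// Self-signed for brevity; minikube signs with ca.pem/ca-key.pem.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
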
	I1005 20:12:52.474937  380432 provision.go:172] copyRemoteCerts
	I1005 20:12:52.475006  380432 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1005 20:12:52.475049  380432 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-540731
	I1005 20:12:52.493335  380432 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33089 SSHKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/ingress-addon-legacy-540731/id_rsa Username:docker}
	I1005 20:12:52.592404  380432 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-334135/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1005 20:12:52.592466  380432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-334135/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1005 20:12:52.617196  380432 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-334135/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1005 20:12:52.617269  380432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-334135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1005 20:12:52.641971  380432 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-334135/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1005 20:12:52.642053  380432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-334135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1005 20:12:52.666889  380432 provision.go:86] duration metric: configureAuth took 355.32611ms
	I1005 20:12:52.666924  380432 ubuntu.go:193] setting minikube options for container-runtime
	I1005 20:12:52.667173  380432 config.go:182] Loaded profile config "ingress-addon-legacy-540731": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1005 20:12:52.667320  380432 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-540731
	I1005 20:12:52.685650  380432 main.go:141] libmachine: Using SSH client type: native
	I1005 20:12:52.685999  380432 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 127.0.0.1 33089 <nil> <nil>}
	I1005 20:12:52.686016  380432 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1005 20:12:52.945465  380432 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1005 20:12:52.945497  380432 machine.go:91] provisioned docker machine in 3.956258618s
	I1005 20:12:52.945507  380432 client.go:171] LocalClient.Create took 12.100366439s
	I1005 20:12:52.945530  380432 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-540731" took 12.100450956s
	I1005 20:12:52.945540  380432 start.go:300] post-start starting for "ingress-addon-legacy-540731" (driver="docker")
	I1005 20:12:52.945552  380432 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1005 20:12:52.945616  380432 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1005 20:12:52.945680  380432 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-540731
	I1005 20:12:52.963880  380432 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33089 SSHKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/ingress-addon-legacy-540731/id_rsa Username:docker}
	I1005 20:12:53.060698  380432 ssh_runner.go:195] Run: cat /etc/os-release
	I1005 20:12:53.064415  380432 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1005 20:12:53.064452  380432 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1005 20:12:53.064461  380432 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1005 20:12:53.064469  380432 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1005 20:12:53.064482  380432 filesync.go:126] Scanning /home/jenkins/minikube-integration/17363-334135/.minikube/addons for local assets ...
	I1005 20:12:53.064546  380432 filesync.go:126] Scanning /home/jenkins/minikube-integration/17363-334135/.minikube/files for local assets ...
	I1005 20:12:53.064614  380432 filesync.go:149] local asset: /home/jenkins/minikube-integration/17363-334135/.minikube/files/etc/ssl/certs/3409292.pem -> 3409292.pem in /etc/ssl/certs
	I1005 20:12:53.064625  380432 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-334135/.minikube/files/etc/ssl/certs/3409292.pem -> /etc/ssl/certs/3409292.pem
	I1005 20:12:53.064751  380432 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1005 20:12:53.073726  380432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-334135/.minikube/files/etc/ssl/certs/3409292.pem --> /etc/ssl/certs/3409292.pem (1708 bytes)
	I1005 20:12:53.098503  380432 start.go:303] post-start completed in 152.945681ms
	I1005 20:12:53.098861  380432 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-540731
	I1005 20:12:53.116418  380432 profile.go:148] Saving config to /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731/config.json ...
	I1005 20:12:53.116696  380432 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1005 20:12:53.116744  380432 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-540731
	I1005 20:12:53.134475  380432 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33089 SSHKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/ingress-addon-legacy-540731/id_rsa Username:docker}
	I1005 20:12:53.228584  380432 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1005 20:12:53.233496  380432 start.go:128] duration metric: createHost completed in 12.39228746s
	I1005 20:12:53.233526  380432 start.go:83] releasing machines lock for "ingress-addon-legacy-540731", held for 12.39245381s
	I1005 20:12:53.233615  380432 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-540731
	I1005 20:12:53.252347  380432 ssh_runner.go:195] Run: cat /version.json
	I1005 20:12:53.252413  380432 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1005 20:12:53.252460  380432 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-540731
	I1005 20:12:53.252416  380432 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-540731
	I1005 20:12:53.272676  380432 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33089 SSHKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/ingress-addon-legacy-540731/id_rsa Username:docker}
	I1005 20:12:53.272696  380432 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33089 SSHKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/ingress-addon-legacy-540731/id_rsa Username:docker}
	I1005 20:12:53.455429  380432 ssh_runner.go:195] Run: systemctl --version
	I1005 20:12:53.460227  380432 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1005 20:12:53.600771  380432 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1005 20:12:53.605627  380432 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1005 20:12:53.625848  380432 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1005 20:12:53.625949  380432 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1005 20:12:53.656827  380432 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1005 20:12:53.656850  380432 start.go:469] detecting cgroup driver to use...
	I1005 20:12:53.656889  380432 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1005 20:12:53.656930  380432 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1005 20:12:53.673117  380432 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1005 20:12:53.684736  380432 docker.go:197] disabling cri-docker service (if available) ...
	I1005 20:12:53.684807  380432 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1005 20:12:53.698974  380432 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1005 20:12:53.714075  380432 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1005 20:12:53.790782  380432 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1005 20:12:53.870804  380432 docker.go:213] disabling docker service ...
	I1005 20:12:53.870873  380432 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1005 20:12:53.890746  380432 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1005 20:12:53.902545  380432 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1005 20:12:53.982134  380432 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1005 20:12:54.063259  380432 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1005 20:12:54.074823  380432 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1005 20:12:54.091547  380432 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1005 20:12:54.091615  380432 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1005 20:12:54.102083  380432 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1005 20:12:54.102156  380432 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1005 20:12:54.112627  380432 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1005 20:12:54.123161  380432 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1005 20:12:54.133701  380432 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1005 20:12:54.143562  380432 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1005 20:12:54.152632  380432 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1005 20:12:54.161727  380432 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1005 20:12:54.241217  380432 ssh_runner.go:195] Run: sudo systemctl restart crio
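
The sed pipeline above pins the pause image, forces the cgroupfs cgroup manager, and places conmon in the "pod" cgroup before restarting cri-o. The same rewrite sketched in Go (unlike the sed version, this assumes no conmon_cgroup line pre-exists in the drop-in):

	// crioconf_sketch.go - rewrite /etc/crio/crio.conf.d/02-crio.conf the way
	// the sed commands above do.
	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		const path = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		s := string(data)
		s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.2"`)
		s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(s, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
		if err := os.WriteFile(path, []byte(s), 0644); err != nil {
			panic(err)
		}
	}
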
	I1005 20:12:54.350530  380432 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1005 20:12:54.350605  380432 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1005 20:12:54.354476  380432 start.go:537] Will wait 60s for crictl version
	I1005 20:12:54.354544  380432 ssh_runner.go:195] Run: which crictl
	I1005 20:12:54.358102  380432 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1005 20:12:54.393891  380432 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1005 20:12:54.393993  380432 ssh_runner.go:195] Run: crio --version
	I1005 20:12:54.432093  380432 ssh_runner.go:195] Run: crio --version
	I1005 20:12:54.473591  380432 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I1005 20:12:54.475088  380432 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-540731 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1005 20:12:54.492504  380432 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1005 20:12:54.496534  380432 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1005 20:12:54.508433  380432 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1005 20:12:54.508491  380432 ssh_runner.go:195] Run: sudo crictl images --output json
	I1005 20:12:54.558086  380432 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1005 20:12:54.558160  380432 ssh_runner.go:195] Run: which lz4
	I1005 20:12:54.561804  380432 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-334135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1005 20:12:54.561897  380432 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1005 20:12:54.565433  380432 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1005 20:12:54.565465  380432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-334135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I1005 20:12:55.660507  380432 crio.go:444] Took 1.098640 seconds to copy over tarball
	I1005 20:12:55.660578  380432 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1005 20:12:58.143835  380432 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.483226009s)
	I1005 20:12:58.143871  380432 crio.go:451] Took 2.483333 seconds to extract the tarball
	I1005 20:12:58.143883  380432 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1005 20:12:58.217474  380432 ssh_runner.go:195] Run: sudo crictl images --output json
	I1005 20:12:58.252983  380432 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1005 20:12:58.253013  380432 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1005 20:12:58.253107  380432 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1005 20:12:58.253121  380432 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1005 20:12:58.253133  380432 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1005 20:12:58.253146  380432 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1005 20:12:58.253172  380432 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1005 20:12:58.253125  380432 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1005 20:12:58.253109  380432 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1005 20:12:58.253277  380432 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1005 20:12:58.254495  380432 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1005 20:12:58.254498  380432 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1005 20:12:58.254655  380432 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1005 20:12:58.254671  380432 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1005 20:12:58.254671  380432 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1005 20:12:58.254671  380432 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1005 20:12:58.254703  380432 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1005 20:12:58.254970  380432 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1005 20:12:58.437034  380432 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I1005 20:12:58.461857  380432 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I1005 20:12:58.465991  380432 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I1005 20:12:58.468349  380432 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1005 20:12:58.482868  380432 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I1005 20:12:58.482922  380432 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I1005 20:12:58.482973  380432 ssh_runner.go:195] Run: which crictl
	I1005 20:12:58.511258  380432 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I1005 20:12:58.514763  380432 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I1005 20:12:58.522638  380432 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I1005 20:12:58.522695  380432 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1005 20:12:58.522744  380432 ssh_runner.go:195] Run: which crictl
	I1005 20:12:58.522765  380432 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I1005 20:12:58.522786  380432 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1005 20:12:58.522820  380432 ssh_runner.go:195] Run: which crictl
	I1005 20:12:58.526306  380432 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I1005 20:12:58.531792  380432 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1005 20:12:58.531843  380432 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I1005 20:12:58.531853  380432 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1005 20:12:58.531885  380432 ssh_runner.go:195] Run: which crictl
	I1005 20:12:58.562795  380432 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1005 20:12:58.630617  380432 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I1005 20:12:58.630677  380432 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1005 20:12:58.630736  380432 ssh_runner.go:195] Run: which crictl
	I1005 20:12:58.678998  380432 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I1005 20:12:58.679054  380432 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1005 20:12:58.679096  380432 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I1005 20:12:58.679110  380432 ssh_runner.go:195] Run: which crictl
	I1005 20:12:58.679150  380432 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I1005 20:12:58.679206  380432 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I1005 20:12:58.679247  380432 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17363-334135/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I1005 20:12:58.679241  380432 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1005 20:12:58.679273  380432 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1005 20:12:58.679284  380432 ssh_runner.go:195] Run: which crictl
	I1005 20:12:58.762612  380432 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1005 20:12:58.762636  380432 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17363-334135/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I1005 20:12:58.762702  380432 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17363-334135/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1005 20:12:58.762739  380432 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1005 20:12:58.762774  380432 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1005 20:12:58.762845  380432 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17363-334135/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1005 20:12:58.831329  380432 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17363-334135/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I1005 20:12:58.831459  380432 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17363-334135/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1005 20:12:58.835694  380432 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17363-334135/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I1005 20:12:58.835756  380432 cache_images.go:92] LoadImages completed in 582.730564ms
	W1005 20:12:58.835835  380432 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17363-334135/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7: no such file or directory
	I1005 20:12:58.835944  380432 ssh_runner.go:195] Run: crio config
	I1005 20:12:58.882705  380432 cni.go:84] Creating CNI manager for ""
	I1005 20:12:58.882732  380432 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1005 20:12:58.882754  380432 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1005 20:12:58.882777  380432 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-540731 NodeName:ingress-addon-legacy-540731 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1005 20:12:58.882976  380432 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-540731"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1005 20:12:58.883107  380432 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-540731 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-540731 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1005 20:12:58.883181  380432 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1005 20:12:58.892497  380432 binaries.go:44] Found k8s binaries, skipping transfer
	I1005 20:12:58.892583  380432 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1005 20:12:58.902016  380432 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I1005 20:12:58.921133  380432 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1005 20:12:58.939562  380432 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1005 20:12:58.957944  380432 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1005 20:12:58.962071  380432 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
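	The /etc/hosts rewrite above deliberately avoids sed -i, which would replace the file's inode and fail on Docker's bind-mounted /etc/hosts: it filters out any stale control-plane.minikube.internal line, appends the fresh mapping, stages the result in a temp file keyed by the shell PID, and copies it back with sudo so the target is truncated in place and keeps its owner and mode. The same commands, reflowed for readability:

	    # Identical to the one-liner in the log, split across lines:
	    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
	      echo "192.168.49.2	control-plane.minikube.internal"
	    } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts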
	I1005 20:12:58.973498  380432 certs.go:56] Setting up /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731 for IP: 192.168.49.2
	I1005 20:12:58.973557  380432 certs.go:190] acquiring lock for shared ca certs: {Name:mk1be6ef34f8fc4cfa2ec636f9e6906c15e2096a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:12:58.973719  380432 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17363-334135/.minikube/ca.key
	I1005 20:12:58.973768  380432 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17363-334135/.minikube/proxy-client-ca.key
	I1005 20:12:58.973819  380432 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731/client.key
	I1005 20:12:58.973832  380432 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731/client.crt with IP's: []
	I1005 20:12:59.059025  380432 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731/client.crt ...
	I1005 20:12:59.059059  380432 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731/client.crt: {Name:mk3eaf08b40eeab710e359b9a35db3e03b1ac2e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:12:59.059286  380432 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731/client.key ...
	I1005 20:12:59.059302  380432 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731/client.key: {Name:mk80a9e29444c268fdab4297c3aa5b4515713b24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:12:59.059394  380432 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731/apiserver.key.dd3b5fb2
	I1005 20:12:59.059410  380432 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1005 20:12:59.159377  380432 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731/apiserver.crt.dd3b5fb2 ...
	I1005 20:12:59.159416  380432 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731/apiserver.crt.dd3b5fb2: {Name:mke22e5da50b2d74488121e843da73120355dba6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:12:59.159600  380432 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731/apiserver.key.dd3b5fb2 ...
	I1005 20:12:59.159612  380432 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731/apiserver.key.dd3b5fb2: {Name:mkd2c2e6f0a1f1a8ab794cb8d49915623a2f3fef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:12:59.159684  380432 certs.go:337] copying /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731/apiserver.crt
	I1005 20:12:59.159756  380432 certs.go:341] copying /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731/apiserver.key
	I1005 20:12:59.159808  380432 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731/proxy-client.key
	I1005 20:12:59.159825  380432 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731/proxy-client.crt with IP's: []
	I1005 20:12:59.361898  380432 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731/proxy-client.crt ...
	I1005 20:12:59.361936  380432 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731/proxy-client.crt: {Name:mk26de00181e1f0a76e1c354ee45347470382452 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:12:59.362112  380432 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731/proxy-client.key ...
	I1005 20:12:59.362123  380432 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731/proxy-client.key: {Name:mkc17ca0364c8a7984827f7a5600246c8de981f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:12:59.362196  380432 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1005 20:12:59.362214  380432 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1005 20:12:59.362225  380432 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1005 20:12:59.362237  380432 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1005 20:12:59.362255  380432 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-334135/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1005 20:12:59.362270  380432 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-334135/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1005 20:12:59.362283  380432 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-334135/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1005 20:12:59.362295  380432 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-334135/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1005 20:12:59.362356  380432 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-334135/.minikube/certs/home/jenkins/minikube-integration/17363-334135/.minikube/certs/340929.pem (1338 bytes)
	W1005 20:12:59.362393  380432 certs.go:433] ignoring /home/jenkins/minikube-integration/17363-334135/.minikube/certs/home/jenkins/minikube-integration/17363-334135/.minikube/certs/340929_empty.pem, impossibly tiny 0 bytes
	I1005 20:12:59.362404  380432 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-334135/.minikube/certs/home/jenkins/minikube-integration/17363-334135/.minikube/certs/ca-key.pem (1679 bytes)
	I1005 20:12:59.362432  380432 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-334135/.minikube/certs/home/jenkins/minikube-integration/17363-334135/.minikube/certs/ca.pem (1078 bytes)
	I1005 20:12:59.362454  380432 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-334135/.minikube/certs/home/jenkins/minikube-integration/17363-334135/.minikube/certs/cert.pem (1123 bytes)
	I1005 20:12:59.362476  380432 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-334135/.minikube/certs/home/jenkins/minikube-integration/17363-334135/.minikube/certs/key.pem (1675 bytes)
	I1005 20:12:59.362513  380432 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-334135/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17363-334135/.minikube/files/etc/ssl/certs/3409292.pem (1708 bytes)
	I1005 20:12:59.362545  380432 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-334135/.minikube/certs/340929.pem -> /usr/share/ca-certificates/340929.pem
	I1005 20:12:59.362558  380432 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-334135/.minikube/files/etc/ssl/certs/3409292.pem -> /usr/share/ca-certificates/3409292.pem
	I1005 20:12:59.362571  380432 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-334135/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1005 20:12:59.363252  380432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1005 20:12:59.388214  380432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1005 20:12:59.413042  380432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1005 20:12:59.438119  380432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1005 20:12:59.463478  380432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-334135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1005 20:12:59.488304  380432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-334135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1005 20:12:59.513267  380432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-334135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1005 20:12:59.538135  380432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-334135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1005 20:12:59.562822  380432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-334135/.minikube/certs/340929.pem --> /usr/share/ca-certificates/340929.pem (1338 bytes)
	I1005 20:12:59.587983  380432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-334135/.minikube/files/etc/ssl/certs/3409292.pem --> /usr/share/ca-certificates/3409292.pem (1708 bytes)
	I1005 20:12:59.612689  380432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-334135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1005 20:12:59.637287  380432 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1005 20:12:59.655809  380432 ssh_runner.go:195] Run: openssl version
	I1005 20:12:59.661605  380432 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3409292.pem && ln -fs /usr/share/ca-certificates/3409292.pem /etc/ssl/certs/3409292.pem"
	I1005 20:12:59.671727  380432 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3409292.pem
	I1005 20:12:59.675484  380432 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  5 20:09 /usr/share/ca-certificates/3409292.pem
	I1005 20:12:59.675556  380432 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3409292.pem
	I1005 20:12:59.682667  380432 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3409292.pem /etc/ssl/certs/3ec20f2e.0"
	I1005 20:12:59.692831  380432 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1005 20:12:59.702824  380432 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1005 20:12:59.706828  380432 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  5 20:03 /usr/share/ca-certificates/minikubeCA.pem
	I1005 20:12:59.706900  380432 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1005 20:12:59.713964  380432 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1005 20:12:59.724177  380432 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/340929.pem && ln -fs /usr/share/ca-certificates/340929.pem /etc/ssl/certs/340929.pem"
	I1005 20:12:59.734159  380432 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/340929.pem
	I1005 20:12:59.738218  380432 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  5 20:09 /usr/share/ca-certificates/340929.pem
	I1005 20:12:59.738279  380432 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/340929.pem
	I1005 20:12:59.745445  380432 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/340929.pem /etc/ssl/certs/51391683.0"
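	The .0 symlink names in /etc/ssl/certs are not arbitrary: OpenSSL locates CA certificates by subject hash, and the openssl x509 -hash calls above compute exactly the basename used for each link. Checked by hand:

	    # The subject hash is the symlink basename OpenSSL looks up:
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	    # -> b5213941, matching the /etc/ssl/certs/b5213941.0 link created above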
	I1005 20:12:59.755452  380432 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1005 20:12:59.759230  380432 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1005 20:12:59.759294  380432 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-540731 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-540731 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1005 20:12:59.759411  380432 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1005 20:12:59.759472  380432 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1005 20:12:59.797000  380432 cri.go:89] found id: ""
	I1005 20:12:59.797085  380432 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1005 20:12:59.806157  380432 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1005 20:12:59.815448  380432 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1005 20:12:59.815523  380432 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1005 20:12:59.824604  380432 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1005 20:12:59.824664  380432 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1005 20:12:59.871635  380432 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1005 20:12:59.871759  380432 kubeadm.go:322] [preflight] Running pre-flight checks
	I1005 20:12:59.914823  380432 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1005 20:12:59.914911  380432 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1044-gcp
	I1005 20:12:59.914950  380432 kubeadm.go:322] OS: Linux
	I1005 20:12:59.914989  380432 kubeadm.go:322] CGROUPS_CPU: enabled
	I1005 20:12:59.915033  380432 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1005 20:12:59.915104  380432 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1005 20:12:59.915164  380432 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1005 20:12:59.915233  380432 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1005 20:12:59.915290  380432 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1005 20:12:59.987871  380432 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1005 20:12:59.988018  380432 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1005 20:12:59.988172  380432 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1005 20:13:00.190094  380432 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1005 20:13:00.191079  380432 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1005 20:13:00.191170  380432 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1005 20:13:00.265724  380432 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1005 20:13:00.267803  380432 out.go:204]   - Generating certificates and keys ...
	I1005 20:13:00.267947  380432 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1005 20:13:00.268068  380432 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1005 20:13:00.391762  380432 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1005 20:13:00.490525  380432 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1005 20:13:00.569076  380432 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1005 20:13:00.649485  380432 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1005 20:13:00.952718  380432 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1005 20:13:00.952869  380432 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-540731 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1005 20:13:01.095802  380432 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1005 20:13:01.095953  380432 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-540731 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1005 20:13:01.351470  380432 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1005 20:13:01.669202  380432 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1005 20:13:01.793694  380432 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1005 20:13:01.793795  380432 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1005 20:13:01.943161  380432 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1005 20:13:01.997015  380432 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1005 20:13:02.331951  380432 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1005 20:13:02.544426  380432 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1005 20:13:02.545074  380432 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1005 20:13:02.547976  380432 out.go:204]   - Booting up control plane ...
	I1005 20:13:02.548140  380432 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1005 20:13:02.551411  380432 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1005 20:13:02.554550  380432 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1005 20:13:02.556005  380432 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1005 20:13:02.558519  380432 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1005 20:13:09.060752  380432 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.502329 seconds
	I1005 20:13:09.060936  380432 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1005 20:13:09.073489  380432 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1005 20:13:09.595595  380432 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1005 20:13:09.595754  380432 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-540731 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1005 20:13:10.102996  380432 kubeadm.go:322] [bootstrap-token] Using token: o1damy.5ggshuxza4ftejiw
	I1005 20:13:10.104619  380432 out.go:204]   - Configuring RBAC rules ...
	I1005 20:13:10.104779  380432 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1005 20:13:10.108782  380432 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1005 20:13:10.116518  380432 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1005 20:13:10.119000  380432 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1005 20:13:10.121799  380432 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1005 20:13:10.124369  380432 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1005 20:13:10.133522  380432 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1005 20:13:10.426343  380432 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1005 20:13:10.535126  380432 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1005 20:13:10.536241  380432 kubeadm.go:322] 
	I1005 20:13:10.536354  380432 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1005 20:13:10.536374  380432 kubeadm.go:322] 
	I1005 20:13:10.536474  380432 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1005 20:13:10.536489  380432 kubeadm.go:322] 
	I1005 20:13:10.536546  380432 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1005 20:13:10.536658  380432 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1005 20:13:10.536744  380432 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1005 20:13:10.536754  380432 kubeadm.go:322] 
	I1005 20:13:10.536821  380432 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1005 20:13:10.536939  380432 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1005 20:13:10.537034  380432 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1005 20:13:10.537047  380432 kubeadm.go:322] 
	I1005 20:13:10.537143  380432 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1005 20:13:10.537226  380432 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1005 20:13:10.537234  380432 kubeadm.go:322] 
	I1005 20:13:10.537341  380432 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token o1damy.5ggshuxza4ftejiw \
	I1005 20:13:10.537478  380432 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:af54c40b34df9aa62a3cf1403ac0941464ca2ce3fa61291d1928dbb7869129bb \
	I1005 20:13:10.537514  380432 kubeadm.go:322]     --control-plane 
	I1005 20:13:10.537524  380432 kubeadm.go:322] 
	I1005 20:13:10.537631  380432 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1005 20:13:10.537642  380432 kubeadm.go:322] 
	I1005 20:13:10.537745  380432 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token o1damy.5ggshuxza4ftejiw \
	I1005 20:13:10.537910  380432 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:af54c40b34df9aa62a3cf1403ac0941464ca2ce3fa61291d1928dbb7869129bb 
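	The --discovery-token-ca-cert-hash printed with the join commands pins the cluster CA so joining nodes can authenticate the control plane before trusting it. A sketch of recomputing that sha256 value from the CA certificate (the standard kubeadm recipe; the cert path is taken from certificatesDir in the config above):

	    # Hash of the CA's DER-encoded public key, as used by kubeadm join:
	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'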
	I1005 20:13:10.539793  380432 kubeadm.go:322] W1005 20:12:59.871012    1385 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1005 20:13:10.540104  380432 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1044-gcp\n", err: exit status 1
	I1005 20:13:10.540194  380432 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1005 20:13:10.540358  380432 kubeadm.go:322] W1005 20:13:02.551041    1385 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1005 20:13:10.540528  380432 kubeadm.go:322] W1005 20:13:02.554382    1385 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1005 20:13:10.540571  380432 cni.go:84] Creating CNI manager for ""
	I1005 20:13:10.540586  380432 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1005 20:13:10.542919  380432 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1005 20:13:10.544657  380432 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1005 20:13:10.549271  380432 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I1005 20:13:10.549297  380432 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1005 20:13:10.568441  380432 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1005 20:13:11.038413  380432 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1005 20:13:11.038489  380432 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:13:11.038493  380432 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=300d55cee86053f5b4c7a654fc8e7b9d3c030d53 minikube.k8s.io/name=ingress-addon-legacy-540731 minikube.k8s.io/updated_at=2023_10_05T20_13_11_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:13:11.046622  380432 ops.go:34] apiserver oom_adj: -16
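	The oom_adj: -16 read back here means the kube-apiserver process has been deprioritized for the kernel's OOM killer (more negative values are killed later), which is exactly what the cat /proc/.../oom_adj run above checks:

	    # Negative oom_adj makes the OOM killer less likely to pick the process:
	    cat /proc/$(pgrep kube-apiserver)/oom_adj    # prints -16 in this run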
	I1005 20:13:11.164980  380432 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:13:11.260530  380432 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:13:11.833217  380432 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:13:12.333512  380432 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:13:12.833278  380432 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:13:13.333393  380432 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:13:13.832689  380432 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:13:14.332601  380432 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:13:14.833072  380432 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:13:15.332656  380432 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:13:15.833606  380432 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:13:16.333119  380432 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:13:16.832957  380432 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:13:17.332768  380432 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:13:17.832986  380432 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:13:18.333006  380432 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:13:18.833613  380432 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:13:19.333311  380432 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:13:19.833596  380432 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:13:20.333424  380432 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:13:20.833009  380432 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:13:21.332833  380432 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:13:21.832629  380432 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:13:22.333627  380432 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:13:22.833345  380432 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:13:23.332746  380432 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:13:23.832697  380432 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:13:24.332794  380432 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:13:24.832972  380432 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:13:25.333482  380432 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:13:25.832621  380432 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:13:25.904513  380432 kubeadm.go:1081] duration metric: took 14.866088653s to wait for elevateKubeSystemPrivileges.
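	The burst of kubectl get sa default runs above is a readiness poll: the default ServiceAccount only appears once the controller-manager's service-account controller is up, so minikube retries roughly twice a second until the get succeeds (14.87s here) before considering kube-system privilege elevation done. A minimal sketch of the same wait, assuming kubectl already points at the new cluster:

	    # Block until the default ServiceAccount exists:
	    until kubectl get sa default >/dev/null 2>&1; do
	        sleep 0.5
	    done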
	I1005 20:13:25.904559  380432 kubeadm.go:406] StartCluster complete in 26.145271983s
	I1005 20:13:25.904586  380432 settings.go:142] acquiring lock: {Name:mk6ed3422387c6b56e20ba6eb900649f1c8038d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:13:25.904672  380432 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17363-334135/kubeconfig
	I1005 20:13:25.905495  380432 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-334135/kubeconfig: {Name:mk99d37d95bb8af0e1f4fc14f039efe68f627fd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:13:25.905767  380432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1005 20:13:25.905894  380432 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1005 20:13:25.905980  380432 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-540731"
	I1005 20:13:25.906024  380432 config.go:182] Loaded profile config "ingress-addon-legacy-540731": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1005 20:13:25.906047  380432 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-540731"
	I1005 20:13:25.906020  380432 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-540731"
	I1005 20:13:25.906110  380432 host.go:66] Checking if "ingress-addon-legacy-540731" exists ...
	I1005 20:13:25.906111  380432 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-540731"
	I1005 20:13:25.906544  380432 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-540731 --format={{.State.Status}}
	I1005 20:13:25.906484  380432 kapi.go:59] client config for ingress-addon-legacy-540731: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731/client.crt", KeyFile:"/home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731/client.key", CAFile:"/home/jenkins/minikube-integration/17363-334135/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bfbf60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1005 20:13:25.906686  380432 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-540731 --format={{.State.Status}}
	I1005 20:13:25.907324  380432 cert_rotation.go:137] Starting client certificate rotation controller
	I1005 20:13:25.930845  380432 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-540731" context rescaled to 1 replicas
	I1005 20:13:25.930910  380432 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1005 20:13:25.933914  380432 out.go:177] * Verifying Kubernetes components...
	I1005 20:13:25.935592  380432 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1005 20:13:25.936990  380432 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1005 20:13:25.938883  380432 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1005 20:13:25.938918  380432 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1005 20:13:25.939001  380432 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-540731
	I1005 20:13:25.937723  380432 kapi.go:59] client config for ingress-addon-legacy-540731: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731/client.crt", KeyFile:"/home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731/client.key", CAFile:"/home/jenkins/minikube-integration/17363-334135/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bfbf60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1005 20:13:25.939532  380432 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-540731"
	I1005 20:13:25.939609  380432 host.go:66] Checking if "ingress-addon-legacy-540731" exists ...
	I1005 20:13:25.940227  380432 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-540731 --format={{.State.Status}}
	I1005 20:13:25.973434  380432 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33089 SSHKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/ingress-addon-legacy-540731/id_rsa Username:docker}
	I1005 20:13:25.974474  380432 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1005 20:13:25.974496  380432 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1005 20:13:25.974548  380432 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-540731
	I1005 20:13:25.993442  380432 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33089 SSHKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/ingress-addon-legacy-540731/id_rsa Username:docker}
	I1005 20:13:26.125838  380432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
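	The sed pipeline above edits the CoreDNS Corefile in flight: it inserts a hosts plugin block just before the forward . /etc/resolv.conf line and a log directive before errors, then pushes the result back with kubectl replace. The injected fragment, confirmed by the "host record injected" message below, resolves host.minikube.internal to the host-side gateway:

	    hosts {
	       192.168.49.1 host.minikube.internal
	       fallthrough
	    }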
	I1005 20:13:26.126770  380432 kapi.go:59] client config for ingress-addon-legacy-540731: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731/client.crt", KeyFile:"/home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731/client.key", CAFile:"/home/jenkins/minikube-integration/17363-334135/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bfbf60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1005 20:13:26.127218  380432 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-540731" to be "Ready" ...
	I1005 20:13:26.328139  380432 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1005 20:13:26.331391  380432 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1005 20:13:26.749906  380432 start.go:923] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1005 20:13:26.958126  380432 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1005 20:13:26.959811  380432 addons.go:502] enable addons completed in 1.053923245s: enabled=[storage-provisioner default-storageclass]
	I1005 20:13:28.142365  380432 node_ready.go:58] node "ingress-addon-legacy-540731" has status "Ready":"False"
	I1005 20:13:30.142813  380432 node_ready.go:58] node "ingress-addon-legacy-540731" has status "Ready":"False"
	I1005 20:13:31.201527  380432 node_ready.go:49] node "ingress-addon-legacy-540731" has status "Ready":"True"
	I1005 20:13:31.201559  380432 node_ready.go:38] duration metric: took 5.074309686s waiting for node "ingress-addon-legacy-540731" to be "Ready" ...
	I1005 20:13:31.201574  380432 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1005 20:13:31.272937  380432 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-c5g58" in "kube-system" namespace to be "Ready" ...
	I1005 20:13:33.425521  380432 pod_ready.go:102] pod "coredns-66bff467f8-c5g58" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-10-05 20:13:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1005 20:13:35.928349  380432 pod_ready.go:102] pod "coredns-66bff467f8-c5g58" in "kube-system" namespace has status "Ready":"False"
	I1005 20:13:37.929606  380432 pod_ready.go:102] pod "coredns-66bff467f8-c5g58" in "kube-system" namespace has status "Ready":"False"
	I1005 20:13:39.428333  380432 pod_ready.go:92] pod "coredns-66bff467f8-c5g58" in "kube-system" namespace has status "Ready":"True"
	I1005 20:13:39.428365  380432 pod_ready.go:81] duration metric: took 8.15538223s waiting for pod "coredns-66bff467f8-c5g58" in "kube-system" namespace to be "Ready" ...
	I1005 20:13:39.428377  380432 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-540731" in "kube-system" namespace to be "Ready" ...
	I1005 20:13:39.433630  380432 pod_ready.go:92] pod "etcd-ingress-addon-legacy-540731" in "kube-system" namespace has status "Ready":"True"
	I1005 20:13:39.433656  380432 pod_ready.go:81] duration metric: took 5.272515ms waiting for pod "etcd-ingress-addon-legacy-540731" in "kube-system" namespace to be "Ready" ...
	I1005 20:13:39.433670  380432 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-540731" in "kube-system" namespace to be "Ready" ...
	I1005 20:13:39.438818  380432 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-540731" in "kube-system" namespace has status "Ready":"True"
	I1005 20:13:39.438844  380432 pod_ready.go:81] duration metric: took 5.167157ms waiting for pod "kube-apiserver-ingress-addon-legacy-540731" in "kube-system" namespace to be "Ready" ...
	I1005 20:13:39.438861  380432 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-540731" in "kube-system" namespace to be "Ready" ...
	I1005 20:13:39.444562  380432 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-540731" in "kube-system" namespace has status "Ready":"True"
	I1005 20:13:39.444589  380432 pod_ready.go:81] duration metric: took 5.720238ms waiting for pod "kube-controller-manager-ingress-addon-legacy-540731" in "kube-system" namespace to be "Ready" ...
	I1005 20:13:39.444601  380432 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tmb8k" in "kube-system" namespace to be "Ready" ...
	I1005 20:13:39.449798  380432 pod_ready.go:92] pod "kube-proxy-tmb8k" in "kube-system" namespace has status "Ready":"True"
	I1005 20:13:39.449822  380432 pod_ready.go:81] duration metric: took 5.214806ms waiting for pod "kube-proxy-tmb8k" in "kube-system" namespace to be "Ready" ...
	I1005 20:13:39.449832  380432 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-540731" in "kube-system" namespace to be "Ready" ...
	I1005 20:13:39.623315  380432 request.go:629] Waited for 173.345552ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-540731
	I1005 20:13:39.823606  380432 request.go:629] Waited for 197.380215ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-540731
	I1005 20:13:39.826773  380432 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-540731" in "kube-system" namespace has status "Ready":"True"
	I1005 20:13:39.826804  380432 pod_ready.go:81] duration metric: took 376.963504ms waiting for pod "kube-scheduler-ingress-addon-legacy-540731" in "kube-system" namespace to be "Ready" ...
	I1005 20:13:39.826820  380432 pod_ready.go:38] duration metric: took 8.625209429s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1005 20:13:39.826843  380432 api_server.go:52] waiting for apiserver process to appear ...
	I1005 20:13:39.826908  380432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1005 20:13:39.839720  380432 api_server.go:72] duration metric: took 13.908717499s to wait for apiserver process to appear ...
	I1005 20:13:39.839749  380432 api_server.go:88] waiting for apiserver healthz status ...
	I1005 20:13:39.839777  380432 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1005 20:13:39.844713  380432 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1005 20:13:39.845612  380432 api_server.go:141] control plane version: v1.18.20
	I1005 20:13:39.845639  380432 api_server.go:131] duration metric: took 5.883226ms to wait for apiserver health ...
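	The health probe above is a plain HTTPS GET against the apiserver; /healthz is readable without credentials under the default RBAC rules, so it can be reproduced by hand (sketch; -k skips TLS verification, or pass minikube's ca.crt instead):

	    # Returns HTTP 200 with the literal body "ok" when the apiserver is healthy:
	    curl -k https://192.168.49.2:8443/healthz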
	I1005 20:13:39.845648  380432 system_pods.go:43] waiting for kube-system pods to appear ...
	I1005 20:13:40.023141  380432 request.go:629] Waited for 177.360577ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1005 20:13:40.028787  380432 system_pods.go:59] 8 kube-system pods found
	I1005 20:13:40.028826  380432 system_pods.go:61] "coredns-66bff467f8-c5g58" [fedee5f0-ac59-4e3e-a252-fb1c9292fcd8] Running
	I1005 20:13:40.028832  380432 system_pods.go:61] "etcd-ingress-addon-legacy-540731" [376cb3c8-a0bd-4e9f-a669-a991365dc6ae] Running
	I1005 20:13:40.028836  380432 system_pods.go:61] "kindnet-24vct" [6ad7acc0-8241-43ac-bae1-51110ded8c3d] Running
	I1005 20:13:40.028841  380432 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-540731" [720e04fd-cd53-4691-bb4a-c4891fffac66] Running
	I1005 20:13:40.028845  380432 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-540731" [771e06a5-b8ee-474e-8a0c-69b7fe8a8652] Running
	I1005 20:13:40.028849  380432 system_pods.go:61] "kube-proxy-tmb8k" [592b6bfa-ab7e-4348-afe7-55948f626862] Running
	I1005 20:13:40.028857  380432 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-540731" [e2865376-8692-4ecd-8b0f-7935968de3f7] Running
	I1005 20:13:40.028861  380432 system_pods.go:61] "storage-provisioner" [7d0df9b0-8c52-48c1-92e3-de99cbf2e109] Running
	I1005 20:13:40.028874  380432 system_pods.go:74] duration metric: took 183.214484ms to wait for pod list to return data ...
	I1005 20:13:40.028886  380432 default_sa.go:34] waiting for default service account to be created ...
	I1005 20:13:40.223403  380432 request.go:629] Waited for 194.399724ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I1005 20:13:40.226074  380432 default_sa.go:45] found service account: "default"
	I1005 20:13:40.226105  380432 default_sa.go:55] duration metric: took 197.211734ms for default service account to be created ...
	I1005 20:13:40.226115  380432 system_pods.go:116] waiting for k8s-apps to be running ...
	I1005 20:13:40.423569  380432 request.go:629] Waited for 197.366898ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1005 20:13:40.429294  380432 system_pods.go:86] 8 kube-system pods found
	I1005 20:13:40.429336  380432 system_pods.go:89] "coredns-66bff467f8-c5g58" [fedee5f0-ac59-4e3e-a252-fb1c9292fcd8] Running
	I1005 20:13:40.429343  380432 system_pods.go:89] "etcd-ingress-addon-legacy-540731" [376cb3c8-a0bd-4e9f-a669-a991365dc6ae] Running
	I1005 20:13:40.429349  380432 system_pods.go:89] "kindnet-24vct" [6ad7acc0-8241-43ac-bae1-51110ded8c3d] Running
	I1005 20:13:40.429353  380432 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-540731" [720e04fd-cd53-4691-bb4a-c4891fffac66] Running
	I1005 20:13:40.429358  380432 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-540731" [771e06a5-b8ee-474e-8a0c-69b7fe8a8652] Running
	I1005 20:13:40.429362  380432 system_pods.go:89] "kube-proxy-tmb8k" [592b6bfa-ab7e-4348-afe7-55948f626862] Running
	I1005 20:13:40.429367  380432 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-540731" [e2865376-8692-4ecd-8b0f-7935968de3f7] Running
	I1005 20:13:40.429371  380432 system_pods.go:89] "storage-provisioner" [7d0df9b0-8c52-48c1-92e3-de99cbf2e109] Running
	I1005 20:13:40.429384  380432 system_pods.go:126] duration metric: took 203.258547ms to wait for k8s-apps to be running ...
	I1005 20:13:40.429394  380432 system_svc.go:44] waiting for kubelet service to be running ....
	I1005 20:13:40.429450  380432 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1005 20:13:40.442204  380432 system_svc.go:56] duration metric: took 12.794838ms WaitForService to wait for kubelet.
	I1005 20:13:40.442242  380432 kubeadm.go:581] duration metric: took 14.511250985s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1005 20:13:40.442265  380432 node_conditions.go:102] verifying NodePressure condition ...
	I1005 20:13:40.622678  380432 request.go:629] Waited for 180.324778ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I1005 20:13:40.625817  380432 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1005 20:13:40.625851  380432 node_conditions.go:123] node cpu capacity is 8
	I1005 20:13:40.625865  380432 node_conditions.go:105] duration metric: took 183.594841ms to run NodePressure ...
	I1005 20:13:40.625877  380432 start.go:228] waiting for startup goroutines ...
	I1005 20:13:40.625883  380432 start.go:233] waiting for cluster config update ...
	I1005 20:13:40.625894  380432 start.go:242] writing updated cluster config ...
	I1005 20:13:40.626182  380432 ssh_runner.go:195] Run: rm -f paused
	I1005 20:13:40.677791  380432 start.go:600] kubectl: 1.28.2, cluster: 1.18.20 (minor skew: 10)
	I1005 20:13:40.679960  380432 out.go:177] 
	W1005 20:13:40.681710  380432 out.go:239] ! /usr/local/bin/kubectl is version 1.28.2, which may have incompatibilities with Kubernetes 1.18.20.
	I1005 20:13:40.683347  380432 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1005 20:13:40.685117  380432 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-540731" cluster and "default" namespace by default
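	
	Note: the warning above reflects a 10-minor-version skew between the host kubectl (v1.28.2) and this cluster (v1.18.20), far outside kubectl's supported +/-1 minor skew; the hint points at minikube's bundled kubectl, which matches the cluster version. A minimal invocation against this profile (adding the global -p flag is an assumption here, needed only when this profile is not the active one):
	
	    minikube -p ingress-addon-legacy-540731 kubectl -- get pods -A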
	
	* 
	* ==> CRI-O <==
	* Oct 05 20:16:31 ingress-addon-legacy-540731 crio[958]: time="2023-10-05 20:16:31.500893515Z" level=info msg="Creating container: default/hello-world-app-5f5d8b66bb-79g52/hello-world-app" id=5b4331cb-a0a0-4b6b-9cac-6b3edf9a30b7 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Oct 05 20:16:31 ingress-addon-legacy-540731 crio[958]: time="2023-10-05 20:16:31.501062964Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 05 20:16:31 ingress-addon-legacy-540731 crio[958]: time="2023-10-05 20:16:31.612703848Z" level=info msg="Created container fc81f40589bb9a1e8bfc7e28e25df6de73ccd682b8cbbc6eefb8aecd92bdebf8: default/hello-world-app-5f5d8b66bb-79g52/hello-world-app" id=5b4331cb-a0a0-4b6b-9cac-6b3edf9a30b7 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Oct 05 20:16:31 ingress-addon-legacy-540731 crio[958]: time="2023-10-05 20:16:31.613384107Z" level=info msg="Starting container: fc81f40589bb9a1e8bfc7e28e25df6de73ccd682b8cbbc6eefb8aecd92bdebf8" id=a6ccc814-d1b0-4068-bade-85f84be28112 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Oct 05 20:16:31 ingress-addon-legacy-540731 crio[958]: time="2023-10-05 20:16:31.623771065Z" level=info msg="Started container" PID=4883 containerID=fc81f40589bb9a1e8bfc7e28e25df6de73ccd682b8cbbc6eefb8aecd92bdebf8 description=default/hello-world-app-5f5d8b66bb-79g52/hello-world-app id=a6ccc814-d1b0-4068-bade-85f84be28112 name=/runtime.v1alpha2.RuntimeService/StartContainer sandboxID=e2a429441d3fc2309a15340fd7e6f276c3c2cc84de1c1e41a359ac13a46a6fc5
	Oct 05 20:16:34 ingress-addon-legacy-540731 crio[958]: time="2023-10-05 20:16:34.764881398Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=f3d78b54-28b0-4fb6-a4fd-87e472b27366 name=/runtime.v1alpha2.ImageService/ImageStatus
	Oct 05 20:16:46 ingress-addon-legacy-540731 crio[958]: time="2023-10-05 20:16:46.764471347Z" level=info msg="Stopping pod sandbox: bd0185903b7c6cdb5a72edf01f705fcf5ea1ad11738e2f0a65689dba88d55e2a" id=a81aa1e4-f788-458f-b5d7-5c4950bb7495 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Oct 05 20:16:46 ingress-addon-legacy-540731 crio[958]: time="2023-10-05 20:16:46.765561293Z" level=info msg="Stopped pod sandbox: bd0185903b7c6cdb5a72edf01f705fcf5ea1ad11738e2f0a65689dba88d55e2a" id=a81aa1e4-f788-458f-b5d7-5c4950bb7495 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Oct 05 20:16:48 ingress-addon-legacy-540731 crio[958]: time="2023-10-05 20:16:48.049965399Z" level=info msg="Stopping container: b894c351d62e702ce4ceff0cdebf15639cbf4f5f56ed951bb2a42f0bf6b15cf5 (timeout: 2s)" id=d27cd80b-a292-4349-84fa-bd36ba0cf279 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Oct 05 20:16:48 ingress-addon-legacy-540731 crio[958]: time="2023-10-05 20:16:48.052675605Z" level=info msg="Stopping container: b894c351d62e702ce4ceff0cdebf15639cbf4f5f56ed951bb2a42f0bf6b15cf5 (timeout: 2s)" id=0fb7f4ef-7d85-4a3e-910c-937646e59c40 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Oct 05 20:16:50 ingress-addon-legacy-540731 crio[958]: time="2023-10-05 20:16:50.060820969Z" level=warning msg="Stopping container b894c351d62e702ce4ceff0cdebf15639cbf4f5f56ed951bb2a42f0bf6b15cf5 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=d27cd80b-a292-4349-84fa-bd36ba0cf279 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Oct 05 20:16:50 ingress-addon-legacy-540731 conmon[3417]: conmon b894c351d62e702ce4ce <ninfo>: container 3429 exited with status 137
	Oct 05 20:16:50 ingress-addon-legacy-540731 crio[958]: time="2023-10-05 20:16:50.232743873Z" level=info msg="Stopped container b894c351d62e702ce4ceff0cdebf15639cbf4f5f56ed951bb2a42f0bf6b15cf5: ingress-nginx/ingress-nginx-controller-7fcf777cb7-g4r96/controller" id=d27cd80b-a292-4349-84fa-bd36ba0cf279 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Oct 05 20:16:50 ingress-addon-legacy-540731 crio[958]: time="2023-10-05 20:16:50.232840101Z" level=info msg="Stopped container b894c351d62e702ce4ceff0cdebf15639cbf4f5f56ed951bb2a42f0bf6b15cf5: ingress-nginx/ingress-nginx-controller-7fcf777cb7-g4r96/controller" id=0fb7f4ef-7d85-4a3e-910c-937646e59c40 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Oct 05 20:16:50 ingress-addon-legacy-540731 crio[958]: time="2023-10-05 20:16:50.233453063Z" level=info msg="Stopping pod sandbox: b79a96ef245036c002dd772764332af5d5b24f8c014c87221e22038ad014f5d2" id=e676a687-cb69-408f-9b20-691ad41e499a name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Oct 05 20:16:50 ingress-addon-legacy-540731 crio[958]: time="2023-10-05 20:16:50.233487928Z" level=info msg="Stopping pod sandbox: b79a96ef245036c002dd772764332af5d5b24f8c014c87221e22038ad014f5d2" id=8599d67c-471a-4b9c-bb9f-9b3889c5c802 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Oct 05 20:16:50 ingress-addon-legacy-540731 crio[958]: time="2023-10-05 20:16:50.236851441Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-L67UVAXNAVNEGH3L - [0:0]\n:KUBE-HP-YIXRZ5NRQTZ7MZBK - [0:0]\n-X KUBE-HP-YIXRZ5NRQTZ7MZBK\n-X KUBE-HP-L67UVAXNAVNEGH3L\nCOMMIT\n"
	Oct 05 20:16:50 ingress-addon-legacy-540731 crio[958]: time="2023-10-05 20:16:50.238361001Z" level=info msg="Closing host port tcp:80"
	Oct 05 20:16:50 ingress-addon-legacy-540731 crio[958]: time="2023-10-05 20:16:50.238413362Z" level=info msg="Closing host port tcp:443"
	Oct 05 20:16:50 ingress-addon-legacy-540731 crio[958]: time="2023-10-05 20:16:50.239949517Z" level=info msg="Host port tcp:80 does not have an open socket"
	Oct 05 20:16:50 ingress-addon-legacy-540731 crio[958]: time="2023-10-05 20:16:50.239979171Z" level=info msg="Host port tcp:443 does not have an open socket"
	Oct 05 20:16:50 ingress-addon-legacy-540731 crio[958]: time="2023-10-05 20:16:50.240180522Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7fcf777cb7-g4r96 Namespace:ingress-nginx ID:b79a96ef245036c002dd772764332af5d5b24f8c014c87221e22038ad014f5d2 UID:5e39123f-ce24-4497-bdce-1c16a4ea90a9 NetNS:/var/run/netns/abaaf03e-4e35-446c-a661-87439aa69ba7 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 05 20:16:50 ingress-addon-legacy-540731 crio[958]: time="2023-10-05 20:16:50.240357928Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-g4r96 from CNI network \"kindnet\" (type=ptp)"
	Oct 05 20:16:50 ingress-addon-legacy-540731 crio[958]: time="2023-10-05 20:16:50.292904078Z" level=info msg="Stopped pod sandbox: b79a96ef245036c002dd772764332af5d5b24f8c014c87221e22038ad014f5d2" id=e676a687-cb69-408f-9b20-691ad41e499a name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Oct 05 20:16:50 ingress-addon-legacy-540731 crio[958]: time="2023-10-05 20:16:50.293081963Z" level=info msg="Stopped pod sandbox (already stopped): b79a96ef245036c002dd772764332af5d5b24f8c014c87221e22038ad014f5d2" id=8599d67c-471a-4b9c-bb9f-9b3889c5c802 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
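	
	Note: in the CRI-O log above, the ingress controller does not exit within the 2s stop timeout, so the runtime escalates to SIGKILL; conmon's "exited with status 137" is 128 + 9, i.e. death by SIGKILL, which is the expected outcome of such a short grace period expiring rather than a sign of a crash.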
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	fc81f40589bb9       gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6            24 seconds ago      Running             hello-world-app           0                   e2a429441d3fc       hello-world-app-5f5d8b66bb-79g52
	66a0bfaef82c3       docker.io/library/nginx@sha256:34b58b4f5c6d133d97298cbaae140283dc325ff1aeffb28176f63078baeffd14                    2 minutes ago       Running             nginx                     0                   ef7be0988ecc6       nginx
	b894c351d62e7       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   3 minutes ago       Exited              controller                0                   b79a96ef24503       ingress-nginx-controller-7fcf777cb7-g4r96
	fccfd9a945dab       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              patch                     0                   e4fe05d80a740       ingress-nginx-admission-patch-95qwh
	4791ff1e22710       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              create                    0                   e5f0950563c04       ingress-nginx-admission-create-kxr76
	a116f079c9e9f       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   069091cd7bb6a       coredns-66bff467f8-c5g58
	8c80684e5d5a3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       0                   9e4202bd2f51f       storage-provisioner
	c79a1004e8fd3       docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052                 3 minutes ago       Running             kindnet-cni               0                   3ab101443afd3       kindnet-24vct
	46c3e3de50144       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   357ece95b5b9c       kube-proxy-tmb8k
	37d1294527ca3       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   3 minutes ago       Running             kube-apiserver            0                   4044b83d5314f       kube-apiserver-ingress-addon-legacy-540731
	e9d162e36dfff       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   3 minutes ago       Running             etcd                      0                   64c7026d2a166       etcd-ingress-addon-legacy-540731
	1b81d8ee17701       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   3 minutes ago       Running             kube-scheduler            0                   94646a5b8b1f9       kube-scheduler-ingress-addon-legacy-540731
	966563cdabb35       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   3 minutes ago       Running             kube-controller-manager   0                   a0dc32c9ed034       kube-controller-manager-ingress-addon-legacy-540731
	
	* 
	* ==> coredns [a116f079c9e9fcdda7e267b96acb7a6c56798b0c59fc98b9d3ec6be1ec6861b2] <==
	* [INFO] 10.244.0.5:52133 - 5050 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.005908666s
	[INFO] 10.244.0.5:51564 - 10302 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00683969s
	[INFO] 10.244.0.5:49384 - 3664 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.007113675s
	[INFO] 10.244.0.5:40861 - 60357 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006731061s
	[INFO] 10.244.0.5:37714 - 11244 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.007221176s
	[INFO] 10.244.0.5:59904 - 62584 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.007099947s
	[INFO] 10.244.0.5:56978 - 12941 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.007183131s
	[INFO] 10.244.0.5:52133 - 38690 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00652629s
	[INFO] 10.244.0.5:34510 - 33163 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.007246991s
	[INFO] 10.244.0.5:52133 - 61376 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005870749s
	[INFO] 10.244.0.5:49384 - 42134 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006162735s
	[INFO] 10.244.0.5:56978 - 60054 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006124277s
	[INFO] 10.244.0.5:40861 - 8031 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006347956s
	[INFO] 10.244.0.5:34510 - 52520 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005605848s
	[INFO] 10.244.0.5:59904 - 28731 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006245229s
	[INFO] 10.244.0.5:51564 - 9698 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00651305s
	[INFO] 10.244.0.5:37714 - 50643 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006176608s
	[INFO] 10.244.0.5:52133 - 8184 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000140608s
	[INFO] 10.244.0.5:49384 - 49473 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000121413s
	[INFO] 10.244.0.5:56978 - 30902 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000064833s
	[INFO] 10.244.0.5:34510 - 56449 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000050987s
	[INFO] 10.244.0.5:40861 - 31180 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000124402s
	[INFO] 10.244.0.5:37714 - 2184 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000112975s
	[INFO] 10.244.0.5:51564 - 25024 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000137799s
	[INFO] 10.244.0.5:59904 - 32909 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000225512s
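	
	Note: the NXDOMAIN-then-NOERROR pattern above is ordinary resolv.conf search-list expansion, not a DNS failure: "hello-world-app.default.svc.cluster.local" has fewer dots than the ndots threshold, so the pod resolver first appends each search suffix (including the host-derived c.k8s-minikube.internal and google.internal domains) before trying the name as-is, which finally answers NOERROR. A sketch of a pod /etc/resolv.conf consistent with these queries (suffix order inferred from the log; the nameserver address is the conventional kube-dns ClusterIP and is an assumption here):
	
	    search default.svc.cluster.local svc.cluster.local cluster.local c.k8s-minikube.internal google.internal
	    nameserver 10.96.0.10
	    options ndots:5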
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-540731
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-540731
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=300d55cee86053f5b4c7a654fc8e7b9d3c030d53
	                    minikube.k8s.io/name=ingress-addon-legacy-540731
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_05T20_13_11_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Oct 2023 20:13:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-540731
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Oct 2023 20:16:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Oct 2023 20:16:41 +0000   Thu, 05 Oct 2023 20:13:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Oct 2023 20:16:41 +0000   Thu, 05 Oct 2023 20:13:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Oct 2023 20:16:41 +0000   Thu, 05 Oct 2023 20:13:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Oct 2023 20:16:41 +0000   Thu, 05 Oct 2023 20:13:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-540731
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	System Info:
	  Machine ID:                 be930b4f216840739f38ac54650a6fde
	  System UUID:                71f367f4-adbb-4849-ba56-bfe55091699a
	  Boot ID:                    442b7abc-f6f6-4fc0-9fdb-d53241b6517a
	  Kernel Version:             5.15.0-1044-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-79g52                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m47s
	  kube-system                 coredns-66bff467f8-c5g58                               100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m30s
	  kube-system                 etcd-ingress-addon-legacy-540731                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	  kube-system                 kindnet-24vct                                          100m (1%)    100m (1%)   50Mi (0%)        50Mi (0%)      3m30s
	  kube-system                 kube-apiserver-ingress-addon-legacy-540731             250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m45s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-540731    200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m45s
	  kube-system                 kube-proxy-tmb8k                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m30s
	  kube-system                 kube-scheduler-ingress-addon-legacy-540731             100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             120Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From        Message
	  ----    ------                   ----   ----        -------
	  Normal  Starting                 3m45s  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m45s  kubelet     Node ingress-addon-legacy-540731 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m45s  kubelet     Node ingress-addon-legacy-540731 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m45s  kubelet     Node ingress-addon-legacy-540731 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m29s  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m25s  kubelet     Node ingress-addon-legacy-540731 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.007365] FS-Cache: O-key=[8] 'b2a20f0200000000'
	[  +0.005044] FS-Cache: N-cookie c=00000022 [p=00000015 fl=2 nc=0 na=1]
	[  +0.007962] FS-Cache: N-cookie d=0000000012ce25ef{9p.inode} n=00000000233f08a6
	[  +0.008756] FS-Cache: N-key=[8] 'b2a20f0200000000'
	[  +3.104090] FS-Cache: Duplicate cookie detected
	[  +0.004716] FS-Cache: O-cookie c=00000024 [p=00000002 fl=222 nc=0 na=1]
	[  +0.006789] FS-Cache: O-cookie d=0000000046be6370{9P.session} n=00000000d6c24489
	[  +0.007543] FS-Cache: O-key=[10] '34323936363032393237'
	[  +0.005381] FS-Cache: N-cookie c=00000025 [p=00000002 fl=2 nc=0 na=1]
	[  +0.006580] FS-Cache: N-cookie d=0000000046be6370{9P.session} n=000000001982c203
	[  +0.008913] FS-Cache: N-key=[10] '34323936363032393237'
	[Oct 5 20:14] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 2a 16 8b 98 6c c3 92 5e d5 22 a2 f2 08 00
	[  +1.030902] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 2a 16 8b 98 6c c3 92 5e d5 22 a2 f2 08 00
	[  +2.015780] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 2a 16 8b 98 6c c3 92 5e d5 22 a2 f2 08 00
	[  +4.063616] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 2a 16 8b 98 6c c3 92 5e d5 22 a2 f2 08 00
	[  +8.191210] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 2a 16 8b 98 6c c3 92 5e d5 22 a2 f2 08 00
	[ +16.126422] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 2a 16 8b 98 6c c3 92 5e d5 22 a2 f2 08 00
	[Oct 5 20:15] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 2a 16 8b 98 6c c3 92 5e d5 22 a2 f2 08 00
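	
	Note: "martian source 10.244.0.5 from 127.0.0.1, on dev eth0" means packets destined for the pod IP 10.244.0.5 arrived on the pod's eth0 carrying the loopback source address 127.0.0.1, which the kernel drops as invalid. That is consistent with a curl to http://127.0.0.1/ being DNATed to the ingress pod's hostPort without source rewriting, so replies never return and the request times out; this reading is an inference from the timestamps and addresses, not something the log states directly.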
	
	* 
	* ==> etcd [e9d162e36dfff75b377188b1ede5a0e1370161533eaeb4201f137cf54beb8216] <==
	* 2023-10-05 20:13:03.544606 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-10-05 20:13:03.545742 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-10-05 20:13:03.545878 I | embed: listening for peers on 192.168.49.2:2380
	2023-10-05 20:13:03.545944 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2023/10/05 20:13:03 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/10/05 20:13:03 INFO: aec36adc501070cc became candidate at term 2
	raft2023/10/05 20:13:03 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/10/05 20:13:03 INFO: aec36adc501070cc became leader at term 2
	raft2023/10/05 20:13:03 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-10-05 20:13:03.637212 I | etcdserver: published {Name:ingress-addon-legacy-540731 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-10-05 20:13:03.637244 I | embed: ready to serve client requests
	2023-10-05 20:13:03.637652 I | embed: ready to serve client requests
	2023-10-05 20:13:03.637833 I | etcdserver: setting up the initial cluster version to 3.4
	2023-10-05 20:13:03.638572 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-10-05 20:13:03.638799 I | etcdserver/api: enabled capabilities for version 3.4
	2023-10-05 20:13:03.640559 I | embed: serving client requests on 127.0.0.1:2379
	2023-10-05 20:13:03.641429 I | embed: serving client requests on 192.168.49.2:2379
	2023-10-05 20:13:07.738250 W | etcdserver: read-only range request "key:\"/registry/ranges/servicenodeports\" " with result "range_response_count:0 size:4" took too long (106.651057ms) to execute
	2023-10-05 20:13:07.738461 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-apiserver-ingress-addon-legacy-540731\" " with result "range_response_count:0 size:4" took too long (107.690135ms) to execute
	2023-10-05 20:13:30.971091 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/etcd-ingress-addon-legacy-540731\" " with result "range_response_count:1 size:4788" took too long (196.9407ms) to execute
	2023-10-05 20:13:30.972109 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (159.978273ms) to execute
	2023-10-05 20:13:31.199809 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/coredns-66bff467f8-c5g58.178b4f3c75825ef3\" " with result "range_response_count:1 size:829" took too long (222.28739ms) to execute
	2023-10-05 20:13:31.199836 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-apiserver-ingress-addon-legacy-540731\" " with result "range_response_count:1 size:6680" took too long (219.914616ms) to execute
	2023-10-05 20:13:31.199854 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/storage-provisioner\" " with result "range_response_count:1 size:2792" took too long (127.202767ms) to execute
	2023-10-05 20:13:31.356674 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/storage-provisioner.178b4f3cbe678ef4\" " with result "range_response_count:1 size:814" took too long (152.575848ms) to execute
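	
	Note: etcd's "took too long ... to execute" warnings fire when a request exceeds the server's slow-request threshold (100ms by default). The durations above stay in the roughly 100-250ms range during kubeadm's startup burst, suggesting transient disk/CPU contention on the test host rather than an etcd failure.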
	
	* 
	* ==> kernel <==
	*  20:16:55 up  1:59,  0 users,  load average: 0.14, 0.46, 0.61
	Linux ingress-addon-legacy-540731 5.15.0-1044-gcp #52~20.04.1-Ubuntu SMP Wed Sep 20 16:25:19 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [c79a1004e8fd37e38771c6edd3e9fefebe8fb68d2eb5817dd386d2f812f78f5e] <==
	* I1005 20:14:49.886600       1 main.go:227] handling current node
	I1005 20:14:59.898957       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 20:14:59.898984       1 main.go:227] handling current node
	I1005 20:15:09.903332       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 20:15:09.903365       1 main.go:227] handling current node
	I1005 20:15:19.906804       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 20:15:19.906835       1 main.go:227] handling current node
	I1005 20:15:29.911143       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 20:15:29.911172       1 main.go:227] handling current node
	I1005 20:15:39.922396       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 20:15:39.922426       1 main.go:227] handling current node
	I1005 20:15:49.934536       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 20:15:49.934564       1 main.go:227] handling current node
	I1005 20:15:59.938592       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 20:15:59.938621       1 main.go:227] handling current node
	I1005 20:16:09.950323       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 20:16:09.950352       1 main.go:227] handling current node
	I1005 20:16:19.962521       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 20:16:19.962552       1 main.go:227] handling current node
	I1005 20:16:29.966906       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 20:16:29.967033       1 main.go:227] handling current node
	I1005 20:16:39.970683       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 20:16:39.970713       1 main.go:227] handling current node
	I1005 20:16:49.982546       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 20:16:49.982572       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [37d1294527ca369f463f62a3feeb099019512ae3b0c582e0e620e36061048215] <==
	* I1005 20:13:07.521248       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	E1005 20:13:07.522044       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I1005 20:13:07.621831       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I1005 20:13:07.621881       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1005 20:13:07.621897       1 cache.go:39] Caches are synced for autoregister controller
	I1005 20:13:07.626969       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I1005 20:13:07.627433       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1005 20:13:08.519176       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1005 20:13:08.519216       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1005 20:13:08.526569       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1005 20:13:08.529734       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1005 20:13:08.529757       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1005 20:13:08.885608       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1005 20:13:08.929815       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1005 20:13:09.051689       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1005 20:13:09.052835       1 controller.go:609] quota admission added evaluator for: endpoints
	I1005 20:13:09.056727       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1005 20:13:09.429421       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1005 20:13:09.798361       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1005 20:13:10.369405       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1005 20:13:10.522954       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1005 20:13:25.550997       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1005 20:13:25.623986       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1005 20:13:41.432591       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I1005 20:14:07.917475       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	* 
	* ==> kube-controller-manager [966563cdabb35a62e2e0142a6361599e7a4671c33ef9b22b58b2243f1a464938] <==
	* I1005 20:13:25.630385       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"cf442379-6860-4050-9338-59751435eec6", APIVersion:"apps/v1", ResourceVersion:"213", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-tmb8k
	I1005 20:13:25.630413       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"fa46edb7-c6bc-4afa-9d66-1dab18401302", APIVersion:"apps/v1", ResourceVersion:"236", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-24vct
	I1005 20:13:25.635312       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"90fabfa5-36b5-40d6-adf4-17df49592d11", APIVersion:"apps/v1", ResourceVersion:"330", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-xbbdh
	I1005 20:13:25.642593       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"90fabfa5-36b5-40d6-adf4-17df49592d11", APIVersion:"apps/v1", ResourceVersion:"330", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-c5g58
	I1005 20:13:25.934923       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"4d01a968-4aff-415d-8fc3-c4f0bde91f17", APIVersion:"apps/v1", ResourceVersion:"362", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I1005 20:13:25.952061       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"90fabfa5-36b5-40d6-adf4-17df49592d11", APIVersion:"apps/v1", ResourceVersion:"363", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-xbbdh
	I1005 20:13:25.965233       1 shared_informer.go:230] Caches are synced for attach detach 
	I1005 20:13:26.033312       1 shared_informer.go:230] Caches are synced for endpoint 
	I1005 20:13:26.065512       1 shared_informer.go:230] Caches are synced for resource quota 
	I1005 20:13:26.098259       1 shared_informer.go:230] Caches are synced for resource quota 
	I1005 20:13:26.120496       1 shared_informer.go:230] Caches are synced for endpoint_slice 
	I1005 20:13:26.221273       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1005 20:13:26.221430       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1005 20:13:26.419269       1 request.go:621] Throttling request took 1.021918157s, request: GET:https://control-plane.minikube.internal:8443/apis/rbac.authorization.k8s.io/v1beta1?timeout=32s
	I1005 20:13:26.920797       1 shared_informer.go:223] Waiting for caches to sync for garbage collector
	I1005 20:13:26.920859       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1005 20:13:35.620827       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I1005 20:13:41.394555       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"d739b6b6-94ca-4b8b-b086-7b6676d87b49", APIVersion:"apps/v1", ResourceVersion:"473", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1005 20:13:41.426910       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"a375e6a8-3e6c-43a2-808d-ae5ab4e49c16", APIVersion:"apps/v1", ResourceVersion:"474", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-g4r96
	I1005 20:13:41.440299       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"1b114f0d-4109-45ea-aa54-6e9269c76c0f", APIVersion:"batch/v1", ResourceVersion:"482", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-kxr76
	I1005 20:13:41.523199       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"381664ce-2830-4711-8f2b-dcdb33efe4c9", APIVersion:"batch/v1", ResourceVersion:"488", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-95qwh
	I1005 20:13:44.892016       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"1b114f0d-4109-45ea-aa54-6e9269c76c0f", APIVersion:"batch/v1", ResourceVersion:"492", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1005 20:13:44.899653       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"381664ce-2830-4711-8f2b-dcdb33efe4c9", APIVersion:"batch/v1", ResourceVersion:"498", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1005 20:16:29.605694       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"7aa816f1-2bed-48f4-adf5-e9407eeb3843", APIVersion:"apps/v1", ResourceVersion:"713", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I1005 20:16:29.612198       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"19d54b53-0204-4f14-9097-ce242b82342e", APIVersion:"apps/v1", ResourceVersion:"714", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-79g52
	
	* 
	* ==> kube-proxy [46c3e3de5014458e79e8b9ff666c9eb5e3286a563c9cb2349ce3d7f181eeb4a3] <==
	* W1005 20:13:26.737636       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1005 20:13:26.750935       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I1005 20:13:26.750966       1 server_others.go:186] Using iptables Proxier.
	I1005 20:13:26.751370       1 server.go:583] Version: v1.18.20
	I1005 20:13:26.751953       1 config.go:315] Starting service config controller
	I1005 20:13:26.751972       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1005 20:13:26.752018       1 config.go:133] Starting endpoints config controller
	I1005 20:13:26.752064       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1005 20:13:26.852145       1 shared_informer.go:230] Caches are synced for service config 
	I1005 20:13:26.852217       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [1b81d8ee17701333c0611c85e934a90dddcdb4a4acf008774c62f80aeb736ff8] <==
	* W1005 20:13:07.547393       1 authentication.go:349] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1005 20:13:07.547418       1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1005 20:13:07.547426       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1005 20:13:07.547431       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1005 20:13:07.639671       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1005 20:13:07.639807       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1005 20:13:07.642245       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1005 20:13:07.642362       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1005 20:13:07.720702       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1005 20:13:07.720907       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1005 20:13:07.732889       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1005 20:13:07.732889       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1005 20:13:07.733040       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1005 20:13:07.733079       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1005 20:13:07.733182       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1005 20:13:07.733193       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1005 20:13:07.733287       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1005 20:13:07.733402       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1005 20:13:07.733444       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1005 20:13:07.733523       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1005 20:13:07.733586       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1005 20:13:07.735301       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1005 20:13:08.620332       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1005 20:13:08.711380       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I1005 20:13:09.342584       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
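	
	Note: the burst of "Failed to list ... is forbidden" errors above is the usual scheduler startup race: the scheduler begins listing resources before kubeadm has finished creating the system:kube-scheduler RBAC bindings. The errors stop by 20:13:09, once the final "Caches are synced" line lands, so they are benign here.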
	
	* 
	* ==> kubelet <==
	* Oct 05 20:16:06 ingress-addon-legacy-540731 kubelet[1867]: E1005 20:16:06.765428    1867 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Oct 05 20:16:06 ingress-addon-legacy-540731 kubelet[1867]: E1005 20:16:06.765464    1867 pod_workers.go:191] Error syncing pod b15e3f26-b968-4b31-8712-b9c0cc68ced8 ("kube-ingress-dns-minikube_kube-system(b15e3f26-b968-4b31-8712-b9c0cc68ced8)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Oct 05 20:16:20 ingress-addon-legacy-540731 kubelet[1867]: E1005 20:16:20.765523    1867 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Oct 05 20:16:20 ingress-addon-legacy-540731 kubelet[1867]: E1005 20:16:20.765571    1867 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Oct 05 20:16:20 ingress-addon-legacy-540731 kubelet[1867]: E1005 20:16:20.765629    1867 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Oct 05 20:16:20 ingress-addon-legacy-540731 kubelet[1867]: E1005 20:16:20.765664    1867 pod_workers.go:191] Error syncing pod b15e3f26-b968-4b31-8712-b9c0cc68ced8 ("kube-ingress-dns-minikube_kube-system(b15e3f26-b968-4b31-8712-b9c0cc68ced8)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Oct 05 20:16:29 ingress-addon-legacy-540731 kubelet[1867]: I1005 20:16:29.616812    1867 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Oct 05 20:16:29 ingress-addon-legacy-540731 kubelet[1867]: I1005 20:16:29.788737    1867 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-d86vj" (UniqueName: "kubernetes.io/secret/35d8bcb5-d73d-4ead-8510-51c08c9ec02d-default-token-d86vj") pod "hello-world-app-5f5d8b66bb-79g52" (UID: "35d8bcb5-d73d-4ead-8510-51c08c9ec02d")
	Oct 05 20:16:29 ingress-addon-legacy-540731 kubelet[1867]: W1005 20:16:29.976212    1867 manager.go:1131] Failed to process watch event {EventType:0 Name:/docker/21228f7bce93d8130112368e459036b5790c0f1e3167f9afb664e2a20921fa60/crio-e2a429441d3fc2309a15340fd7e6f276c3c2cc84de1c1e41a359ac13a46a6fc5 WatchSource:0}: Error finding container e2a429441d3fc2309a15340fd7e6f276c3c2cc84de1c1e41a359ac13a46a6fc5: Status 404 returned error &{%!s(*http.body=&{0xc0019c20a0 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x750800) %!s(func() error=0x750790)}
	Oct 05 20:16:34 ingress-addon-legacy-540731 kubelet[1867]: E1005 20:16:34.765297    1867 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Oct 05 20:16:34 ingress-addon-legacy-540731 kubelet[1867]: E1005 20:16:34.765357    1867 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Oct 05 20:16:34 ingress-addon-legacy-540731 kubelet[1867]: E1005 20:16:34.765436    1867 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Oct 05 20:16:34 ingress-addon-legacy-540731 kubelet[1867]: E1005 20:16:34.765470    1867 pod_workers.go:191] Error syncing pod b15e3f26-b968-4b31-8712-b9c0cc68ced8 ("kube-ingress-dns-minikube_kube-system(b15e3f26-b968-4b31-8712-b9c0cc68ced8)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Oct 05 20:16:45 ingress-addon-legacy-540731 kubelet[1867]: I1005 20:16:45.457790    1867 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-r7pmd" (UniqueName: "kubernetes.io/secret/b15e3f26-b968-4b31-8712-b9c0cc68ced8-minikube-ingress-dns-token-r7pmd") pod "b15e3f26-b968-4b31-8712-b9c0cc68ced8" (UID: "b15e3f26-b968-4b31-8712-b9c0cc68ced8")
	Oct 05 20:16:45 ingress-addon-legacy-540731 kubelet[1867]: I1005 20:16:45.460037    1867 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b15e3f26-b968-4b31-8712-b9c0cc68ced8-minikube-ingress-dns-token-r7pmd" (OuterVolumeSpecName: "minikube-ingress-dns-token-r7pmd") pod "b15e3f26-b968-4b31-8712-b9c0cc68ced8" (UID: "b15e3f26-b968-4b31-8712-b9c0cc68ced8"). InnerVolumeSpecName "minikube-ingress-dns-token-r7pmd". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 05 20:16:45 ingress-addon-legacy-540731 kubelet[1867]: I1005 20:16:45.558193    1867 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-r7pmd" (UniqueName: "kubernetes.io/secret/b15e3f26-b968-4b31-8712-b9c0cc68ced8-minikube-ingress-dns-token-r7pmd") on node "ingress-addon-legacy-540731" DevicePath ""
	Oct 05 20:16:48 ingress-addon-legacy-540731 kubelet[1867]: E1005 20:16:48.051146    1867 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-g4r96.178b4f6b9117e234", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-g4r96", UID:"5e39123f-ce24-4497-bdce-1c16a4ea90a9", APIVersion:"v1", ResourceVersion:"479", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-540731"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc13fe44c02f30234, ext:217723861611, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc13fe44c02f30234, ext:217723861611, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-g4r96.178b4f6b9117e234" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Oct 05 20:16:48 ingress-addon-legacy-540731 kubelet[1867]: E1005 20:16:48.056313    1867 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-g4r96.178b4f6b9117e234", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-g4r96", UID:"5e39123f-ce24-4497-bdce-1c16a4ea90a9", APIVersion:"v1", ResourceVersion:"479", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-540731"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc13fe44c02f30234, ext:217723861611, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc13fe44c031ea05c, ext:217726720149, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-g4r96.178b4f6b9117e234" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Oct 05 20:16:51 ingress-addon-legacy-540731 kubelet[1867]: W1005 20:16:51.238133    1867 pod_container_deletor.go:77] Container "b79a96ef245036c002dd772764332af5d5b24f8c014c87221e22038ad014f5d2" not found in pod's containers
	Oct 05 20:16:52 ingress-addon-legacy-540731 kubelet[1867]: I1005 20:16:52.230398    1867 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-gvq7r" (UniqueName: "kubernetes.io/secret/5e39123f-ce24-4497-bdce-1c16a4ea90a9-ingress-nginx-token-gvq7r") pod "5e39123f-ce24-4497-bdce-1c16a4ea90a9" (UID: "5e39123f-ce24-4497-bdce-1c16a4ea90a9")
	Oct 05 20:16:52 ingress-addon-legacy-540731 kubelet[1867]: I1005 20:16:52.230464    1867 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/5e39123f-ce24-4497-bdce-1c16a4ea90a9-webhook-cert") pod "5e39123f-ce24-4497-bdce-1c16a4ea90a9" (UID: "5e39123f-ce24-4497-bdce-1c16a4ea90a9")
	Oct 05 20:16:52 ingress-addon-legacy-540731 kubelet[1867]: I1005 20:16:52.232798    1867 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e39123f-ce24-4497-bdce-1c16a4ea90a9-ingress-nginx-token-gvq7r" (OuterVolumeSpecName: "ingress-nginx-token-gvq7r") pod "5e39123f-ce24-4497-bdce-1c16a4ea90a9" (UID: "5e39123f-ce24-4497-bdce-1c16a4ea90a9"). InnerVolumeSpecName "ingress-nginx-token-gvq7r". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 05 20:16:52 ingress-addon-legacy-540731 kubelet[1867]: I1005 20:16:52.232953    1867 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e39123f-ce24-4497-bdce-1c16a4ea90a9-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "5e39123f-ce24-4497-bdce-1c16a4ea90a9" (UID: "5e39123f-ce24-4497-bdce-1c16a4ea90a9"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 05 20:16:52 ingress-addon-legacy-540731 kubelet[1867]: I1005 20:16:52.330784    1867 reconciler.go:319] Volume detached for volume "ingress-nginx-token-gvq7r" (UniqueName: "kubernetes.io/secret/5e39123f-ce24-4497-bdce-1c16a4ea90a9-ingress-nginx-token-gvq7r") on node "ingress-addon-legacy-540731" DevicePath ""
	Oct 05 20:16:52 ingress-addon-legacy-540731 kubelet[1867]: I1005 20:16:52.330829    1867 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/5e39123f-ce24-4497-bdce-1c16a4ea90a9-webhook-cert") on node "ingress-addon-legacy-540731" DevicePath ""
	
	* 
	* ==> storage-provisioner [8c80684e5d5a301b7fceaf5360e219022160da6c31e51b3229b53d3063e1eba0] <==
	* I1005 20:13:36.035662       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1005 20:13:36.044797       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1005 20:13:36.044853       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1005 20:13:36.053637       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1005 20:13:36.053806       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-540731_87362cc7-49f8-40a0-a92a-888aa739a577!
	I1005 20:13:36.054923       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ad5734bc-c1a1-4a76-bda8-579694d0c68c", APIVersion:"v1", ResourceVersion:"424", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-540731_87362cc7-49f8-40a0-a92a-888aa739a577 became leader
	I1005 20:13:36.153921       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-540731_87362cc7-49f8-40a0-a92a-888aa739a577!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-540731 -n ingress-addon-legacy-540731
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-540731 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (185.02s)
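Note on the kubelet journal above: the repeated event.go:260 "Server rejected event ... (will not retry!)" entries are expected teardown noise, not a cause of this failure. The kubelet is recording "Killing" events for the ingress controller pod while the ingress-nginx namespace is already terminating, so the API server refuses to create new objects in it. A minimal triage sketch, assuming journal text on stdin (hypothetical helper, not part of helpers_test.go), that filters this known-benign pattern out of kubelet error lines:

	// Hypothetical log-triage sketch; not part of the minikube test suite.
	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// benignEventRejection reports whether a kubelet line is the expected
	// "event rejected because the namespace is terminating" noise emitted
	// while an addon namespace is being deleted.
	func benignEventRejection(line string) bool {
		return strings.Contains(line, "Server rejected event") &&
			strings.Contains(line, "because it is being terminated")
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // raise the line cap for long kubelet event dumps
		for sc.Scan() {
			line := sc.Text()
			// journald renders klog error-severity lines as "kubelet[pid]: E...".
			if strings.Contains(line, "]: E") && !benignEventRejection(line) {
				fmt.Println(line) // surface only unexplained kubelet errors
			}
		}
	}

Fed the journal excerpt above, this prints nothing: the only error-severity kubelet lines during teardown are the two benign event rejections.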

TestMultiNode/serial/PingHostFrom2Pods (3.66s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-401792 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-401792 -- exec busybox-5bc68d56bd-bk8vz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-401792 -- exec busybox-5bc68d56bd-bk8vz -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-401792 -- exec busybox-5bc68d56bd-bk8vz -- sh -c "ping -c 1 192.168.58.1": exit status 1 (180.981651ms)

-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-bk8vz): exit status 1
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-401792 -- exec busybox-5bc68d56bd-zj2tk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-401792 -- exec busybox-5bc68d56bd-zj2tk -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-401792 -- exec busybox-5bc68d56bd-zj2tk -- sh -c "ping -c 1 192.168.58.1": exit status 1 (184.518633ms)

-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-zj2tk): exit status 1
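Both pods resolve host.minikube.internal, yet ping dies immediately with "ping: permission denied (are you root?)". With busybox that message points at missing ICMP privileges inside the container rather than at the network: raw-socket ping needs CAP_NET_RAW, and the unprivileged ICMP-socket fallback only works when net.ipv4.ping_group_range covers the pod's group. The target 192.168.58.1 is simply the Docker network gateway, as the inspect output below confirms. A diagnostic sketch (hypothetical helper; pod names taken from this run, kubectl assumed on PATH and pointed at the multinode-401792 context) that checks the sysctl before suspecting connectivity:

	// Hypothetical diagnostic sketch; not part of multinode_test.go.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// pingGroupRange reads /proc/sys/net/ipv4/ping_group_range inside a pod.
	// The kernel default "1 0" (min > max) disables unprivileged ICMP
	// sockets, so without CAP_NET_RAW busybox ping fails exactly as above.
	func pingGroupRange(pod string) (string, error) {
		out, err := exec.Command("kubectl", "exec", pod, "--",
			"cat", "/proc/sys/net/ipv4/ping_group_range").CombinedOutput()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		for _, pod := range []string{"busybox-5bc68d56bd-bk8vz", "busybox-5bc68d56bd-zj2tk"} {
			rng, err := pingGroupRange(pod)
			if err != nil {
				fmt.Printf("%s: exec failed: %v\n", pod, err)
				continue
			}
			fmt.Printf("%s: ping_group_range = %q\n", pod, rng)
		}
	}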
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-401792
helpers_test.go:235: (dbg) docker inspect multinode-401792:

-- stdout --
	[
	    {
	        "Id": "21605b8de5b4123278fa2f451fe30f7e931e24a24c7c9741ae865e2f0aa92a17",
	        "Created": "2023-10-05T20:21:38.584638434Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 427612,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-05T20:21:38.918742835Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:94671ba3754e2c6976414eaf20a0c7861a5d2f9fc631e1161e8ab0ded9062c52",
	        "ResolvConfPath": "/var/lib/docker/containers/21605b8de5b4123278fa2f451fe30f7e931e24a24c7c9741ae865e2f0aa92a17/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/21605b8de5b4123278fa2f451fe30f7e931e24a24c7c9741ae865e2f0aa92a17/hostname",
	        "HostsPath": "/var/lib/docker/containers/21605b8de5b4123278fa2f451fe30f7e931e24a24c7c9741ae865e2f0aa92a17/hosts",
	        "LogPath": "/var/lib/docker/containers/21605b8de5b4123278fa2f451fe30f7e931e24a24c7c9741ae865e2f0aa92a17/21605b8de5b4123278fa2f451fe30f7e931e24a24c7c9741ae865e2f0aa92a17-json.log",
	        "Name": "/multinode-401792",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-401792:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "multinode-401792",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/5530520101dfceefa8edba771df66a1b894f80307ee888d9251f8872dd98b6c0-init/diff:/var/lib/docker/overlay2/a21dd10b1c0943795b4df336c5f708b264590966562c18c6ecb8b8c4ccc3838e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5530520101dfceefa8edba771df66a1b894f80307ee888d9251f8872dd98b6c0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5530520101dfceefa8edba771df66a1b894f80307ee888d9251f8872dd98b6c0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5530520101dfceefa8edba771df66a1b894f80307ee888d9251f8872dd98b6c0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-401792",
	                "Source": "/var/lib/docker/volumes/multinode-401792/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-401792",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-401792",
	                "name.minikube.sigs.k8s.io": "multinode-401792",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "952bef94fe211760d7476c99509c45414c8cea8d30370a5a33dda896a6e8baf3",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33145"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33147"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33146"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/952bef94fe21",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-401792": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "21605b8de5b4",
	                        "multinode-401792"
	                    ],
	                    "NetworkID": "ad846d7d92aec8f3c2e6fbce4a8843617a193648ce205daa5f2f347220f40280",
	                    "EndpointID": "cebbf329c01f48957308b9f0d9211f7dd4ae5c8ff20bff5e7fc0d0a7b3aba181",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
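Two details in this inspect dump tie the post-mortem together: the "multinode-401792" network's Gateway is 192.168.58.1, the exact address the pods failed to ping, and 22/tcp is published on 127.0.0.1:33149, the endpoint libmachine dials in the SSH provisioning lines further down. A sketch of pulling just those fields with docker's Go templates, in the same style as the cli_runner.go invocations logged below (helper name is illustrative):

	// Sketch only: reads two fields via `docker container inspect`,
	// mirroring the cli_runner.go template calls in these logs.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// inspectField evaluates a Go template against a container's inspect data.
	func inspectField(container, tmpl string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect",
			"-f", tmpl, container).CombinedOutput()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		const name = "multinode-401792"
		gw, _ := inspectField(name, `{{(index .NetworkSettings.Networks "multinode-401792").Gateway}}`)
		ssh, _ := inspectField(name, `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`)
		fmt.Println("gateway:", gw)        // 192.168.58.1 in this run
		fmt.Println("ssh host port:", ssh) // 33149 in this run
	}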
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-401792 -n multinode-401792
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-401792 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-401792 logs -n 25: (1.487543201s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-638763                           | mount-start-2-638763 | jenkins | v1.31.2 | 05 Oct 23 20:21 UTC | 05 Oct 23 20:21 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-638763 ssh -- ls                    | mount-start-2-638763 | jenkins | v1.31.2 | 05 Oct 23 20:21 UTC | 05 Oct 23 20:21 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-618829                           | mount-start-1-618829 | jenkins | v1.31.2 | 05 Oct 23 20:21 UTC | 05 Oct 23 20:21 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-638763 ssh -- ls                    | mount-start-2-638763 | jenkins | v1.31.2 | 05 Oct 23 20:21 UTC | 05 Oct 23 20:21 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-638763                           | mount-start-2-638763 | jenkins | v1.31.2 | 05 Oct 23 20:21 UTC | 05 Oct 23 20:21 UTC |
	| start   | -p mount-start-2-638763                           | mount-start-2-638763 | jenkins | v1.31.2 | 05 Oct 23 20:21 UTC | 05 Oct 23 20:21 UTC |
	| ssh     | mount-start-2-638763 ssh -- ls                    | mount-start-2-638763 | jenkins | v1.31.2 | 05 Oct 23 20:21 UTC | 05 Oct 23 20:21 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-638763                           | mount-start-2-638763 | jenkins | v1.31.2 | 05 Oct 23 20:21 UTC | 05 Oct 23 20:21 UTC |
	| delete  | -p mount-start-1-618829                           | mount-start-1-618829 | jenkins | v1.31.2 | 05 Oct 23 20:21 UTC | 05 Oct 23 20:21 UTC |
	| start   | -p multinode-401792                               | multinode-401792     | jenkins | v1.31.2 | 05 Oct 23 20:21 UTC | 05 Oct 23 20:23 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-401792 -- apply -f                   | multinode-401792     | jenkins | v1.31.2 | 05 Oct 23 20:23 UTC | 05 Oct 23 20:23 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-401792 -- rollout                    | multinode-401792     | jenkins | v1.31.2 | 05 Oct 23 20:23 UTC | 05 Oct 23 20:23 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-401792 -- get pods -o                | multinode-401792     | jenkins | v1.31.2 | 05 Oct 23 20:23 UTC | 05 Oct 23 20:23 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-401792 -- get pods -o                | multinode-401792     | jenkins | v1.31.2 | 05 Oct 23 20:23 UTC | 05 Oct 23 20:23 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-401792 -- exec                       | multinode-401792     | jenkins | v1.31.2 | 05 Oct 23 20:23 UTC | 05 Oct 23 20:23 UTC |
	|         | busybox-5bc68d56bd-bk8vz --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-401792 -- exec                       | multinode-401792     | jenkins | v1.31.2 | 05 Oct 23 20:23 UTC | 05 Oct 23 20:23 UTC |
	|         | busybox-5bc68d56bd-zj2tk --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-401792 -- exec                       | multinode-401792     | jenkins | v1.31.2 | 05 Oct 23 20:23 UTC | 05 Oct 23 20:23 UTC |
	|         | busybox-5bc68d56bd-bk8vz --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-401792 -- exec                       | multinode-401792     | jenkins | v1.31.2 | 05 Oct 23 20:23 UTC | 05 Oct 23 20:23 UTC |
	|         | busybox-5bc68d56bd-zj2tk --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-401792 -- exec                       | multinode-401792     | jenkins | v1.31.2 | 05 Oct 23 20:23 UTC | 05 Oct 23 20:23 UTC |
	|         | busybox-5bc68d56bd-bk8vz -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-401792 -- exec                       | multinode-401792     | jenkins | v1.31.2 | 05 Oct 23 20:23 UTC | 05 Oct 23 20:23 UTC |
	|         | busybox-5bc68d56bd-zj2tk -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-401792 -- get pods -o                | multinode-401792     | jenkins | v1.31.2 | 05 Oct 23 20:23 UTC | 05 Oct 23 20:23 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-401792 -- exec                       | multinode-401792     | jenkins | v1.31.2 | 05 Oct 23 20:23 UTC | 05 Oct 23 20:23 UTC |
	|         | busybox-5bc68d56bd-bk8vz                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-401792 -- exec                       | multinode-401792     | jenkins | v1.31.2 | 05 Oct 23 20:23 UTC |                     |
	|         | busybox-5bc68d56bd-bk8vz -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-401792 -- exec                       | multinode-401792     | jenkins | v1.31.2 | 05 Oct 23 20:23 UTC | 05 Oct 23 20:23 UTC |
	|         | busybox-5bc68d56bd-zj2tk                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-401792 -- exec                       | multinode-401792     | jenkins | v1.31.2 | 05 Oct 23 20:23 UTC |                     |
	|         | busybox-5bc68d56bd-zj2tk -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/05 20:21:32
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1005 20:21:32.289374  427001 out.go:296] Setting OutFile to fd 1 ...
	I1005 20:21:32.289672  427001 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 20:21:32.289684  427001 out.go:309] Setting ErrFile to fd 2...
	I1005 20:21:32.289688  427001 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 20:21:32.289898  427001 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-334135/.minikube/bin
	I1005 20:21:32.290597  427001 out.go:303] Setting JSON to false
	I1005 20:21:32.292272  427001 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7421,"bootTime":1696529871,"procs":903,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1005 20:21:32.292368  427001 start.go:138] virtualization: kvm guest
	I1005 20:21:32.294904  427001 out.go:177] * [multinode-401792] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1005 20:21:32.296722  427001 out.go:177]   - MINIKUBE_LOCATION=17363
	I1005 20:21:32.296726  427001 notify.go:220] Checking for updates...
	I1005 20:21:32.298365  427001 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1005 20:21:32.299709  427001 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17363-334135/kubeconfig
	I1005 20:21:32.301133  427001 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-334135/.minikube
	I1005 20:21:32.302381  427001 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1005 20:21:32.303664  427001 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1005 20:21:32.305221  427001 driver.go:378] Setting default libvirt URI to qemu:///system
	I1005 20:21:32.328414  427001 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1005 20:21:32.328522  427001 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 20:21:32.385368  427001 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:36 SystemTime:2023-10-05 20:21:32.376175652 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1005 20:21:32.385487  427001 docker.go:294] overlay module found
	I1005 20:21:32.388293  427001 out.go:177] * Using the docker driver based on user configuration
	I1005 20:21:32.389541  427001 start.go:298] selected driver: docker
	I1005 20:21:32.389560  427001 start.go:902] validating driver "docker" against <nil>
	I1005 20:21:32.389574  427001 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1005 20:21:32.390472  427001 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 20:21:32.446209  427001 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:36 SystemTime:2023-10-05 20:21:32.437448727 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1005 20:21:32.446389  427001 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1005 20:21:32.446590  427001 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1005 20:21:32.448405  427001 out.go:177] * Using Docker driver with root privileges
	I1005 20:21:32.449834  427001 cni.go:84] Creating CNI manager for ""
	I1005 20:21:32.449876  427001 cni.go:136] 0 nodes found, recommending kindnet
	I1005 20:21:32.449887  427001 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I1005 20:21:32.449903  427001 start_flags.go:321] config:
	{Name:multinode-401792 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-401792 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1005 20:21:32.451728  427001 out.go:177] * Starting control plane node multinode-401792 in cluster multinode-401792
	I1005 20:21:32.453324  427001 cache.go:122] Beginning downloading kic base image for docker with crio
	I1005 20:21:32.454901  427001 out.go:177] * Pulling base image ...
	I1005 20:21:32.456276  427001 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1005 20:21:32.456328  427001 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon
	I1005 20:21:32.456337  427001 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17363-334135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4
	I1005 20:21:32.456451  427001 cache.go:57] Caching tarball of preloaded images
	I1005 20:21:32.456573  427001 preload.go:174] Found /home/jenkins/minikube-integration/17363-334135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1005 20:21:32.456595  427001 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1005 20:21:32.456992  427001 profile.go:148] Saving config to /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/multinode-401792/config.json ...
	I1005 20:21:32.457022  427001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/multinode-401792/config.json: {Name:mk3ac787b1ca82bffabf432801bb4d1378dcce20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:21:32.474363  427001 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon, skipping pull
	I1005 20:21:32.474388  427001 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae exists in daemon, skipping load
	I1005 20:21:32.474401  427001 cache.go:195] Successfully downloaded all kic artifacts
	I1005 20:21:32.474448  427001 start.go:365] acquiring machines lock for multinode-401792: {Name:mkc59b81f3881237c7b3ed7f3af5bbd7605fd162 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 20:21:32.474577  427001 start.go:369] acquired machines lock for "multinode-401792" in 97.8µs
	I1005 20:21:32.474607  427001 start.go:93] Provisioning new machine with config: &{Name:multinode-401792 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-401792 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1005 20:21:32.474694  427001 start.go:125] createHost starting for "" (driver="docker")
	I1005 20:21:32.476599  427001 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1005 20:21:32.476866  427001 start.go:159] libmachine.API.Create for "multinode-401792" (driver="docker")
	I1005 20:21:32.476930  427001 client.go:168] LocalClient.Create starting
	I1005 20:21:32.477019  427001 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17363-334135/.minikube/certs/ca.pem
	I1005 20:21:32.477067  427001 main.go:141] libmachine: Decoding PEM data...
	I1005 20:21:32.477092  427001 main.go:141] libmachine: Parsing certificate...
	I1005 20:21:32.477166  427001 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17363-334135/.minikube/certs/cert.pem
	I1005 20:21:32.477205  427001 main.go:141] libmachine: Decoding PEM data...
	I1005 20:21:32.477222  427001 main.go:141] libmachine: Parsing certificate...
	I1005 20:21:32.477597  427001 cli_runner.go:164] Run: docker network inspect multinode-401792 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1005 20:21:32.495600  427001 cli_runner.go:211] docker network inspect multinode-401792 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1005 20:21:32.495705  427001 network_create.go:281] running [docker network inspect multinode-401792] to gather additional debugging logs...
	I1005 20:21:32.495734  427001 cli_runner.go:164] Run: docker network inspect multinode-401792
	W1005 20:21:32.513294  427001 cli_runner.go:211] docker network inspect multinode-401792 returned with exit code 1
	I1005 20:21:32.513335  427001 network_create.go:284] error running [docker network inspect multinode-401792]: docker network inspect multinode-401792: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-401792 not found
	I1005 20:21:32.513376  427001 network_create.go:286] output of [docker network inspect multinode-401792]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-401792 not found
	
	** /stderr **
	I1005 20:21:32.513523  427001 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1005 20:21:32.531852  427001 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ed3507f6d890 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:f0:27:80:ac} reservation:<nil>}
	I1005 20:21:32.532566  427001 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0010b2b70}
	I1005 20:21:32.532603  427001 network_create.go:124] attempt to create docker network multinode-401792 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1005 20:21:32.532657  427001 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-401792 multinode-401792
	I1005 20:21:32.590859  427001 network_create.go:108] docker network multinode-401792 192.168.58.0/24 created
	I1005 20:21:32.590895  427001 kic.go:117] calculated static IP "192.168.58.2" for the "multinode-401792" container
	I1005 20:21:32.590962  427001 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1005 20:21:32.608714  427001 cli_runner.go:164] Run: docker volume create multinode-401792 --label name.minikube.sigs.k8s.io=multinode-401792 --label created_by.minikube.sigs.k8s.io=true
	I1005 20:21:32.627856  427001 oci.go:103] Successfully created a docker volume multinode-401792
	I1005 20:21:32.627962  427001 cli_runner.go:164] Run: docker run --rm --name multinode-401792-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-401792 --entrypoint /usr/bin/test -v multinode-401792:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -d /var/lib
	I1005 20:21:33.147123  427001 oci.go:107] Successfully prepared a docker volume multinode-401792
	I1005 20:21:33.147176  427001 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1005 20:21:33.147203  427001 kic.go:190] Starting extracting preloaded images to volume ...
	I1005 20:21:33.147285  427001 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17363-334135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-401792:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir
	I1005 20:21:38.510981  427001 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17363-334135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-401792:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir: (5.363637451s)
	I1005 20:21:38.511026  427001 kic.go:199] duration metric: took 5.363818 seconds to extract preloaded images to volume
	W1005 20:21:38.511206  427001 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1005 20:21:38.511300  427001 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1005 20:21:38.568099  427001 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-401792 --name multinode-401792 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-401792 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-401792 --network multinode-401792 --ip 192.168.58.2 --volume multinode-401792:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae
	I1005 20:21:38.926537  427001 cli_runner.go:164] Run: docker container inspect multinode-401792 --format={{.State.Running}}
	I1005 20:21:38.944992  427001 cli_runner.go:164] Run: docker container inspect multinode-401792 --format={{.State.Status}}
	I1005 20:21:38.964929  427001 cli_runner.go:164] Run: docker exec multinode-401792 stat /var/lib/dpkg/alternatives/iptables
	I1005 20:21:39.007673  427001 oci.go:144] the created container "multinode-401792" has a running status.
	I1005 20:21:39.007714  427001 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17363-334135/.minikube/machines/multinode-401792/id_rsa...
	I1005 20:21:39.170631  427001 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-334135/.minikube/machines/multinode-401792/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1005 20:21:39.170699  427001 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17363-334135/.minikube/machines/multinode-401792/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1005 20:21:39.193086  427001 cli_runner.go:164] Run: docker container inspect multinode-401792 --format={{.State.Status}}
	I1005 20:21:39.215283  427001 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1005 20:21:39.215309  427001 kic_runner.go:114] Args: [docker exec --privileged multinode-401792 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1005 20:21:39.291149  427001 cli_runner.go:164] Run: docker container inspect multinode-401792 --format={{.State.Status}}
	I1005 20:21:39.310196  427001 machine.go:88] provisioning docker machine ...
	I1005 20:21:39.310248  427001 ubuntu.go:169] provisioning hostname "multinode-401792"
	I1005 20:21:39.310319  427001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-401792
	I1005 20:21:39.331262  427001 main.go:141] libmachine: Using SSH client type: native
	I1005 20:21:39.331792  427001 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 127.0.0.1 33149 <nil> <nil>}
	I1005 20:21:39.331881  427001 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-401792 && echo "multinode-401792" | sudo tee /etc/hostname
	I1005 20:21:39.332686  427001 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34396->127.0.0.1:33149: read: connection reset by peer
	I1005 20:21:42.482962  427001 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-401792
	
	I1005 20:21:42.483088  427001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-401792
	I1005 20:21:42.501488  427001 main.go:141] libmachine: Using SSH client type: native
	I1005 20:21:42.501979  427001 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 127.0.0.1 33149 <nil> <nil>}
	I1005 20:21:42.502011  427001 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-401792' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-401792/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-401792' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1005 20:21:42.639822  427001 main.go:141] libmachine: SSH cmd err, output: <nil>: 
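The shell fragment above only touches /etc/hosts when the hostname is not already present, rewriting an existing 127.0.1.1 line in place with sed or appending one with tee -a. A quick way to verify the result on the node:

	grep 127.0.1.1 /etc/hosts   # expected: 127.0.1.1 multinode-401792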
	I1005 20:21:42.639869  427001 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17363-334135/.minikube CaCertPath:/home/jenkins/minikube-integration/17363-334135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17363-334135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17363-334135/.minikube}
	I1005 20:21:42.639900  427001 ubuntu.go:177] setting up certificates
	I1005 20:21:42.639911  427001 provision.go:83] configureAuth start
	I1005 20:21:42.639970  427001 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-401792
	I1005 20:21:42.657973  427001 provision.go:138] copyHostCerts
	I1005 20:21:42.658017  427001 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-334135/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17363-334135/.minikube/ca.pem
	I1005 20:21:42.658050  427001 exec_runner.go:144] found /home/jenkins/minikube-integration/17363-334135/.minikube/ca.pem, removing ...
	I1005 20:21:42.658062  427001 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17363-334135/.minikube/ca.pem
	I1005 20:21:42.658149  427001 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-334135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17363-334135/.minikube/ca.pem (1078 bytes)
	I1005 20:21:42.658245  427001 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-334135/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17363-334135/.minikube/cert.pem
	I1005 20:21:42.658273  427001 exec_runner.go:144] found /home/jenkins/minikube-integration/17363-334135/.minikube/cert.pem, removing ...
	I1005 20:21:42.658283  427001 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17363-334135/.minikube/cert.pem
	I1005 20:21:42.658325  427001 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-334135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17363-334135/.minikube/cert.pem (1123 bytes)
	I1005 20:21:42.658388  427001 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-334135/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17363-334135/.minikube/key.pem
	I1005 20:21:42.658421  427001 exec_runner.go:144] found /home/jenkins/minikube-integration/17363-334135/.minikube/key.pem, removing ...
	I1005 20:21:42.658431  427001 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17363-334135/.minikube/key.pem
	I1005 20:21:42.658466  427001 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-334135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17363-334135/.minikube/key.pem (1675 bytes)
	I1005 20:21:42.658533  427001 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17363-334135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17363-334135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17363-334135/.minikube/certs/ca-key.pem org=jenkins.multinode-401792 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-401792]
	I1005 20:21:42.856658  427001 provision.go:172] copyRemoteCerts
	I1005 20:21:42.856725  427001 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1005 20:21:42.856776  427001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-401792
	I1005 20:21:42.874473  427001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/multinode-401792/id_rsa Username:docker}
	I1005 20:21:42.972173  427001 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-334135/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1005 20:21:42.972240  427001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-334135/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1005 20:21:42.997485  427001 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-334135/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1005 20:21:42.997554  427001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-334135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1005 20:21:43.021708  427001 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-334135/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1005 20:21:43.021773  427001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-334135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1005 20:21:43.045862  427001 provision.go:86] duration metric: configureAuth took 405.936984ms
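configureAuth above copies the host CA and client certificates and generates a server certificate whose SANs are listed at 20:21:42.658533. One way to confirm those SANs actually landed in the cert, assuming openssl is available on the CI host (the path is taken from this run's log):

	openssl x509 -in /home/jenkins/minikube-integration/17363-334135/.minikube/machines/server.pem \
	  -noout -text | grep -A1 'Subject Alternative Name'
	# expected SANs include 192.168.58.2, 127.0.0.1, localhost, minikube, multinode-401792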
	I1005 20:21:43.045898  427001 ubuntu.go:193] setting minikube options for container-runtime
	I1005 20:21:43.046114  427001 config.go:182] Loaded profile config "multinode-401792": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1005 20:21:43.046224  427001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-401792
	I1005 20:21:43.064210  427001 main.go:141] libmachine: Using SSH client type: native
	I1005 20:21:43.064558  427001 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 127.0.0.1 33149 <nil> <nil>}
	I1005 20:21:43.064576  427001 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1005 20:21:43.294068  427001 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1005 20:21:43.294107  427001 machine.go:91] provisioned docker machine in 3.983878128s
	I1005 20:21:43.294123  427001 client.go:171] LocalClient.Create took 10.817179937s
	I1005 20:21:43.294154  427001 start.go:167] duration metric: libmachine.API.Create for "multinode-401792" took 10.817287675s
	I1005 20:21:43.294169  427001 start.go:300] post-start starting for "multinode-401792" (driver="docker")
	I1005 20:21:43.294188  427001 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1005 20:21:43.294288  427001 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1005 20:21:43.294344  427001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-401792
	I1005 20:21:43.312843  427001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/multinode-401792/id_rsa Username:docker}
	I1005 20:21:43.408789  427001 ssh_runner.go:195] Run: cat /etc/os-release
	I1005 20:21:43.412383  427001 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1005 20:21:43.412417  427001 command_runner.go:130] > NAME="Ubuntu"
	I1005 20:21:43.412430  427001 command_runner.go:130] > VERSION_ID="22.04"
	I1005 20:21:43.412436  427001 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1005 20:21:43.412442  427001 command_runner.go:130] > VERSION_CODENAME=jammy
	I1005 20:21:43.412445  427001 command_runner.go:130] > ID=ubuntu
	I1005 20:21:43.412449  427001 command_runner.go:130] > ID_LIKE=debian
	I1005 20:21:43.412456  427001 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1005 20:21:43.412464  427001 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1005 20:21:43.412480  427001 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1005 20:21:43.412497  427001 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1005 20:21:43.412505  427001 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1005 20:21:43.412561  427001 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1005 20:21:43.412584  427001 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1005 20:21:43.412593  427001 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1005 20:21:43.412602  427001 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1005 20:21:43.412617  427001 filesync.go:126] Scanning /home/jenkins/minikube-integration/17363-334135/.minikube/addons for local assets ...
	I1005 20:21:43.412684  427001 filesync.go:126] Scanning /home/jenkins/minikube-integration/17363-334135/.minikube/files for local assets ...
	I1005 20:21:43.412752  427001 filesync.go:149] local asset: /home/jenkins/minikube-integration/17363-334135/.minikube/files/etc/ssl/certs/3409292.pem -> 3409292.pem in /etc/ssl/certs
	I1005 20:21:43.412763  427001 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-334135/.minikube/files/etc/ssl/certs/3409292.pem -> /etc/ssl/certs/3409292.pem
	I1005 20:21:43.412843  427001 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1005 20:21:43.422018  427001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-334135/.minikube/files/etc/ssl/certs/3409292.pem --> /etc/ssl/certs/3409292.pem (1708 bytes)
	I1005 20:21:43.446343  427001 start.go:303] post-start completed in 152.148497ms
	I1005 20:21:43.446708  427001 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-401792
	I1005 20:21:43.464357  427001 profile.go:148] Saving config to /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/multinode-401792/config.json ...
	I1005 20:21:43.464683  427001 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1005 20:21:43.464739  427001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-401792
	I1005 20:21:43.483208  427001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/multinode-401792/id_rsa Username:docker}
	I1005 20:21:43.576042  427001 command_runner.go:130] > 19%
	I1005 20:21:43.576185  427001 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1005 20:21:43.580923  427001 command_runner.go:130] > 238G
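The two df probes above check that /var has headroom before provisioning continues; they can be run directly over ssh. A minimal sketch:

	df -h /var  | awk 'NR==2{print $5}'   # percent used, "19%" in this run
	df -BG /var | awk 'NR==2{print $4}'   # gigabytes free, "238G" in this run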
	I1005 20:21:43.581061  427001 start.go:128] duration metric: createHost completed in 11.106329796s
	I1005 20:21:43.581085  427001 start.go:83] releasing machines lock for "multinode-401792", held for 11.106494945s
	I1005 20:21:43.581152  427001 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-401792
	I1005 20:21:43.599004  427001 ssh_runner.go:195] Run: cat /version.json
	I1005 20:21:43.599103  427001 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1005 20:21:43.599172  427001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-401792
	I1005 20:21:43.599107  427001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-401792
	I1005 20:21:43.618295  427001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/multinode-401792/id_rsa Username:docker}
	I1005 20:21:43.619597  427001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/multinode-401792/id_rsa Username:docker}
	I1005 20:21:43.798473  427001 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1005 20:21:43.800860  427001 command_runner.go:130] > {"iso_version": "v1.31.0-1695060926-17240", "kicbase_version": "v0.0.40-1696360059-17345", "minikube_version": "v1.31.2", "commit": "3da829742e24bcb762d99c062a7806436d0f28e3"}
	I1005 20:21:43.801039  427001 ssh_runner.go:195] Run: systemctl --version
	I1005 20:21:43.805497  427001 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.10)
	I1005 20:21:43.805542  427001 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1005 20:21:43.805649  427001 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1005 20:21:43.945409  427001 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1005 20:21:43.949872  427001 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1005 20:21:43.949900  427001 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1005 20:21:43.949907  427001 command_runner.go:130] > Device: 35h/53d	Inode: 1299532     Links: 1
	I1005 20:21:43.949913  427001 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1005 20:21:43.949923  427001 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I1005 20:21:43.949928  427001 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1005 20:21:43.949933  427001 command_runner.go:130] > Change: 2023-10-05 20:03:12.987873402 +0000
	I1005 20:21:43.949938  427001 command_runner.go:130] >  Birth: 2023-10-05 20:03:12.987873402 +0000
	I1005 20:21:43.950004  427001 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1005 20:21:43.970069  427001 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1005 20:21:43.970159  427001 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1005 20:21:44.000507  427001 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1005 20:21:44.000589  427001 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
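The find invocations above rename any loopback, bridge, or podman CNI configs out of the way so kindnet can own pod networking. An equivalent, more defensively quoted form (a sketch; the unquoted globs in the logged command rely on the node's shell not expanding them):

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
	  -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;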
	I1005 20:21:44.000601  427001 start.go:469] detecting cgroup driver to use...
	I1005 20:21:44.000636  427001 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1005 20:21:44.000685  427001 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1005 20:21:44.016586  427001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1005 20:21:44.028123  427001 docker.go:197] disabling cri-docker service (if available) ...
	I1005 20:21:44.028177  427001 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1005 20:21:44.042458  427001 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1005 20:21:44.056673  427001 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1005 20:21:44.142740  427001 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1005 20:21:44.230215  427001 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1005 20:21:44.230254  427001 docker.go:213] disabling docker service ...
	I1005 20:21:44.230303  427001 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1005 20:21:44.249790  427001 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1005 20:21:44.261674  427001 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1005 20:21:44.273549  427001 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1005 20:21:44.338574  427001 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1005 20:21:44.350237  427001 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1005 20:21:44.423672  427001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
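The steps from 20:21:44.028 through 20:21:44.423 stop, disable, and mask both docker and cri-docker so that CRI-O is the only runtime left answering the kubelet. Condensed into a standalone sketch:

	for unit in cri-docker.socket cri-docker.service docker.socket docker.service; do
	  sudo systemctl stop "$unit" 2>/dev/null || true
	done
	sudo systemctl disable cri-docker.socket docker.socket
	sudo systemctl mask cri-docker.service docker.service
	sudo systemctl is-active --quiet docker && echo "docker still active" || echo "docker stopped"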
	I1005 20:21:44.434993  427001 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1005 20:21:44.451161  427001 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
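With /etc/crictl.yaml pointing at the CRI-O socket, plain crictl calls no longer need an explicit --runtime-endpoint flag. A quick verification sketch:

	cat /etc/crictl.yaml   # runtime-endpoint: unix:///var/run/crio/crio.sock
	sudo crictl version    # should report RuntimeName: cri-o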
	I1005 20:21:44.452065  427001 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1005 20:21:44.452121  427001 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1005 20:21:44.462158  427001 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1005 20:21:44.462229  427001 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1005 20:21:44.472590  427001 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1005 20:21:44.482818  427001 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1005 20:21:44.493716  427001 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
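The sed edits above pin the pause image, switch CRI-O to the cgroupfs manager (matching the "cgroupfs" driver detected on the host at 20:21:44.000636), and force conmon into the pod cgroup. After they run, the drop-in should contain keys roughly like these (a sketch of the expected values, not the full file):

	grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.9"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"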
	I1005 20:21:44.504110  427001 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1005 20:21:44.512300  427001 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1005 20:21:44.513030  427001 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
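Both kernel knobs checked here must hold for pod networking to work: bridged traffic has to traverse iptables, and IPv4 forwarding has to be on. To confirm on the node:

	sudo sysctl net.bridge.bridge-nf-call-iptables   # net.bridge.bridge-nf-call-iptables = 1
	cat /proc/sys/net/ipv4/ip_forward                # 1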
	I1005 20:21:44.522005  427001 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1005 20:21:44.597939  427001 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1005 20:21:44.711831  427001 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1005 20:21:44.711907  427001 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1005 20:21:44.715640  427001 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1005 20:21:44.715664  427001 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1005 20:21:44.715671  427001 command_runner.go:130] > Device: 40h/64d	Inode: 190         Links: 1
	I1005 20:21:44.715679  427001 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1005 20:21:44.715683  427001 command_runner.go:130] > Access: 2023-10-05 20:21:44.698528336 +0000
	I1005 20:21:44.715690  427001 command_runner.go:130] > Modify: 2023-10-05 20:21:44.698528336 +0000
	I1005 20:21:44.715695  427001 command_runner.go:130] > Change: 2023-10-05 20:21:44.698528336 +0000
	I1005 20:21:44.715699  427001 command_runner.go:130] >  Birth: -
	I1005 20:21:44.715718  427001 start.go:537] Will wait 60s for crictl version
	I1005 20:21:44.715776  427001 ssh_runner.go:195] Run: which crictl
	I1005 20:21:44.719694  427001 command_runner.go:130] > /usr/bin/crictl
	I1005 20:21:44.719782  427001 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1005 20:21:44.751593  427001 command_runner.go:130] > Version:  0.1.0
	I1005 20:21:44.751613  427001 command_runner.go:130] > RuntimeName:  cri-o
	I1005 20:21:44.751618  427001 command_runner.go:130] > RuntimeVersion:  1.24.6
	I1005 20:21:44.751623  427001 command_runner.go:130] > RuntimeApiVersion:  v1
	I1005 20:21:44.753914  427001 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1005 20:21:44.753996  427001 ssh_runner.go:195] Run: crio --version
	I1005 20:21:44.788871  427001 command_runner.go:130] > crio version 1.24.6
	I1005 20:21:44.788897  427001 command_runner.go:130] > Version:          1.24.6
	I1005 20:21:44.788908  427001 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1005 20:21:44.788915  427001 command_runner.go:130] > GitTreeState:     clean
	I1005 20:21:44.788923  427001 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1005 20:21:44.788930  427001 command_runner.go:130] > GoVersion:        go1.18.2
	I1005 20:21:44.788936  427001 command_runner.go:130] > Compiler:         gc
	I1005 20:21:44.788942  427001 command_runner.go:130] > Platform:         linux/amd64
	I1005 20:21:44.788963  427001 command_runner.go:130] > Linkmode:         dynamic
	I1005 20:21:44.788977  427001 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1005 20:21:44.788984  427001 command_runner.go:130] > SeccompEnabled:   true
	I1005 20:21:44.788989  427001 command_runner.go:130] > AppArmorEnabled:  false
	I1005 20:21:44.790744  427001 ssh_runner.go:195] Run: crio --version
	I1005 20:21:44.828932  427001 command_runner.go:130] > crio version 1.24.6
	I1005 20:21:44.828964  427001 command_runner.go:130] > Version:          1.24.6
	I1005 20:21:44.828976  427001 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1005 20:21:44.828984  427001 command_runner.go:130] > GitTreeState:     clean
	I1005 20:21:44.828993  427001 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1005 20:21:44.828999  427001 command_runner.go:130] > GoVersion:        go1.18.2
	I1005 20:21:44.829003  427001 command_runner.go:130] > Compiler:         gc
	I1005 20:21:44.829007  427001 command_runner.go:130] > Platform:         linux/amd64
	I1005 20:21:44.829021  427001 command_runner.go:130] > Linkmode:         dynamic
	I1005 20:21:44.829032  427001 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1005 20:21:44.829039  427001 command_runner.go:130] > SeccompEnabled:   true
	I1005 20:21:44.829048  427001 command_runner.go:130] > AppArmorEnabled:  false
	I1005 20:21:44.832143  427001 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.6 ...
	I1005 20:21:44.833571  427001 cli_runner.go:164] Run: docker network inspect multinode-401792 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
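The long Go template above pulls the name, driver, subnet, gateway, MTU, and container IPs out of the docker network in a single call. A trimmed-down sketch that fetches just the subnet and gateway:

	docker network inspect multinode-401792 \
	  --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
	# likely output for this run: 192.168.58.0/24 192.168.58.1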
	I1005 20:21:44.851247  427001 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1005 20:21:44.855166  427001 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
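The bash one-liner above rewrites /etc/hosts atomically through a temp file rather than editing it in place. Its effect can be checked with:

	grep host.minikube.internal /etc/hosts   # 192.168.58.1	host.minikube.internal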
	I1005 20:21:44.866525  427001 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1005 20:21:44.866594  427001 ssh_runner.go:195] Run: sudo crictl images --output json
	I1005 20:21:44.918125  427001 command_runner.go:130] > {
	I1005 20:21:44.918150  427001 command_runner.go:130] >   "images": [
	I1005 20:21:44.918157  427001 command_runner.go:130] >     {
	I1005 20:21:44.918171  427001 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I1005 20:21:44.918179  427001 command_runner.go:130] >       "repoTags": [
	I1005 20:21:44.918189  427001 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1005 20:21:44.918195  427001 command_runner.go:130] >       ],
	I1005 20:21:44.918203  427001 command_runner.go:130] >       "repoDigests": [
	I1005 20:21:44.918216  427001 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1005 20:21:44.918230  427001 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I1005 20:21:44.918242  427001 command_runner.go:130] >       ],
	I1005 20:21:44.918251  427001 command_runner.go:130] >       "size": "65258016",
	I1005 20:21:44.918258  427001 command_runner.go:130] >       "uid": null,
	I1005 20:21:44.918268  427001 command_runner.go:130] >       "username": "",
	I1005 20:21:44.918287  427001 command_runner.go:130] >       "spec": null,
	I1005 20:21:44.918300  427001 command_runner.go:130] >       "pinned": false
	I1005 20:21:44.918306  427001 command_runner.go:130] >     },
	I1005 20:21:44.918312  427001 command_runner.go:130] >     {
	I1005 20:21:44.918327  427001 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1005 20:21:44.918337  427001 command_runner.go:130] >       "repoTags": [
	I1005 20:21:44.918349  427001 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1005 20:21:44.918358  427001 command_runner.go:130] >       ],
	I1005 20:21:44.918368  427001 command_runner.go:130] >       "repoDigests": [
	I1005 20:21:44.918381  427001 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1005 20:21:44.918396  427001 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1005 20:21:44.918407  427001 command_runner.go:130] >       ],
	I1005 20:21:44.918419  427001 command_runner.go:130] >       "size": "31470524",
	I1005 20:21:44.918429  427001 command_runner.go:130] >       "uid": null,
	I1005 20:21:44.918440  427001 command_runner.go:130] >       "username": "",
	I1005 20:21:44.918450  427001 command_runner.go:130] >       "spec": null,
	I1005 20:21:44.918460  427001 command_runner.go:130] >       "pinned": false
	I1005 20:21:44.918469  427001 command_runner.go:130] >     },
	I1005 20:21:44.918480  427001 command_runner.go:130] >     {
	I1005 20:21:44.918493  427001 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I1005 20:21:44.918504  427001 command_runner.go:130] >       "repoTags": [
	I1005 20:21:44.918514  427001 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1005 20:21:44.918524  427001 command_runner.go:130] >       ],
	I1005 20:21:44.918534  427001 command_runner.go:130] >       "repoDigests": [
	I1005 20:21:44.918550  427001 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I1005 20:21:44.918566  427001 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I1005 20:21:44.918574  427001 command_runner.go:130] >       ],
	I1005 20:21:44.918578  427001 command_runner.go:130] >       "size": "53621675",
	I1005 20:21:44.918588  427001 command_runner.go:130] >       "uid": null,
	I1005 20:21:44.918598  427001 command_runner.go:130] >       "username": "",
	I1005 20:21:44.918609  427001 command_runner.go:130] >       "spec": null,
	I1005 20:21:44.918618  427001 command_runner.go:130] >       "pinned": false
	I1005 20:21:44.918628  427001 command_runner.go:130] >     },
	I1005 20:21:44.918638  427001 command_runner.go:130] >     {
	I1005 20:21:44.918651  427001 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I1005 20:21:44.918660  427001 command_runner.go:130] >       "repoTags": [
	I1005 20:21:44.918667  427001 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1005 20:21:44.918676  427001 command_runner.go:130] >       ],
	I1005 20:21:44.918687  427001 command_runner.go:130] >       "repoDigests": [
	I1005 20:21:44.918699  427001 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I1005 20:21:44.918715  427001 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I1005 20:21:44.918730  427001 command_runner.go:130] >       ],
	I1005 20:21:44.918740  427001 command_runner.go:130] >       "size": "295456551",
	I1005 20:21:44.918750  427001 command_runner.go:130] >       "uid": {
	I1005 20:21:44.918758  427001 command_runner.go:130] >         "value": "0"
	I1005 20:21:44.918764  427001 command_runner.go:130] >       },
	I1005 20:21:44.918771  427001 command_runner.go:130] >       "username": "",
	I1005 20:21:44.918781  427001 command_runner.go:130] >       "spec": null,
	I1005 20:21:44.918789  427001 command_runner.go:130] >       "pinned": false
	I1005 20:21:44.918799  427001 command_runner.go:130] >     },
	I1005 20:21:44.918808  427001 command_runner.go:130] >     {
	I1005 20:21:44.918821  427001 command_runner.go:130] >       "id": "cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce",
	I1005 20:21:44.918831  427001 command_runner.go:130] >       "repoTags": [
	I1005 20:21:44.918843  427001 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.2"
	I1005 20:21:44.918854  427001 command_runner.go:130] >       ],
	I1005 20:21:44.918863  427001 command_runner.go:130] >       "repoDigests": [
	I1005 20:21:44.918877  427001 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631",
	I1005 20:21:44.918893  427001 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6beea2e5531a0606613594fd3ed92d71bbdcef99dd3237522049a0b32cad736c"
	I1005 20:21:44.918903  427001 command_runner.go:130] >       ],
	I1005 20:21:44.918914  427001 command_runner.go:130] >       "size": "127149008",
	I1005 20:21:44.918929  427001 command_runner.go:130] >       "uid": {
	I1005 20:21:44.918939  427001 command_runner.go:130] >         "value": "0"
	I1005 20:21:44.918948  427001 command_runner.go:130] >       },
	I1005 20:21:44.918956  427001 command_runner.go:130] >       "username": "",
	I1005 20:21:44.918964  427001 command_runner.go:130] >       "spec": null,
	I1005 20:21:44.918973  427001 command_runner.go:130] >       "pinned": false
	I1005 20:21:44.918983  427001 command_runner.go:130] >     },
	I1005 20:21:44.918990  427001 command_runner.go:130] >     {
	I1005 20:21:44.919004  427001 command_runner.go:130] >       "id": "55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57",
	I1005 20:21:44.919014  427001 command_runner.go:130] >       "repoTags": [
	I1005 20:21:44.919026  427001 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.2"
	I1005 20:21:44.919036  427001 command_runner.go:130] >       ],
	I1005 20:21:44.919046  427001 command_runner.go:130] >       "repoDigests": [
	I1005 20:21:44.919058  427001 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4",
	I1005 20:21:44.919087  427001 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:757a9c9d2f2329799490f9cec6c8ea12dfe4b6225051f6436f39d22a1def682e"
	I1005 20:21:44.919095  427001 command_runner.go:130] >       ],
	I1005 20:21:44.919106  427001 command_runner.go:130] >       "size": "123171638",
	I1005 20:21:44.919116  427001 command_runner.go:130] >       "uid": {
	I1005 20:21:44.919126  427001 command_runner.go:130] >         "value": "0"
	I1005 20:21:44.919135  427001 command_runner.go:130] >       },
	I1005 20:21:44.919146  427001 command_runner.go:130] >       "username": "",
	I1005 20:21:44.919155  427001 command_runner.go:130] >       "spec": null,
	I1005 20:21:44.919162  427001 command_runner.go:130] >       "pinned": false
	I1005 20:21:44.919167  427001 command_runner.go:130] >     },
	I1005 20:21:44.919176  427001 command_runner.go:130] >     {
	I1005 20:21:44.919190  427001 command_runner.go:130] >       "id": "c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0",
	I1005 20:21:44.919201  427001 command_runner.go:130] >       "repoTags": [
	I1005 20:21:44.919213  427001 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.2"
	I1005 20:21:44.919223  427001 command_runner.go:130] >       ],
	I1005 20:21:44.919233  427001 command_runner.go:130] >       "repoDigests": [
	I1005 20:21:44.919250  427001 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded",
	I1005 20:21:44.919261  427001 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf"
	I1005 20:21:44.919270  427001 command_runner.go:130] >       ],
	I1005 20:21:44.919281  427001 command_runner.go:130] >       "size": "74687895",
	I1005 20:21:44.919288  427001 command_runner.go:130] >       "uid": null,
	I1005 20:21:44.919299  427001 command_runner.go:130] >       "username": "",
	I1005 20:21:44.919309  427001 command_runner.go:130] >       "spec": null,
	I1005 20:21:44.919319  427001 command_runner.go:130] >       "pinned": false
	I1005 20:21:44.919327  427001 command_runner.go:130] >     },
	I1005 20:21:44.919333  427001 command_runner.go:130] >     {
	I1005 20:21:44.919350  427001 command_runner.go:130] >       "id": "7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8",
	I1005 20:21:44.919359  427001 command_runner.go:130] >       "repoTags": [
	I1005 20:21:44.919368  427001 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.2"
	I1005 20:21:44.919373  427001 command_runner.go:130] >       ],
	I1005 20:21:44.919380  427001 command_runner.go:130] >       "repoDigests": [
	I1005 20:21:44.919441  427001 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab",
	I1005 20:21:44.919460  427001 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8fc5b9b97128515266d5435273682ba36115d9ca1b68a5749e6f9a23927ef543"
	I1005 20:21:44.919467  427001 command_runner.go:130] >       ],
	I1005 20:21:44.919479  427001 command_runner.go:130] >       "size": "61485878",
	I1005 20:21:44.919486  427001 command_runner.go:130] >       "uid": {
	I1005 20:21:44.919496  427001 command_runner.go:130] >         "value": "0"
	I1005 20:21:44.919502  427001 command_runner.go:130] >       },
	I1005 20:21:44.919511  427001 command_runner.go:130] >       "username": "",
	I1005 20:21:44.919520  427001 command_runner.go:130] >       "spec": null,
	I1005 20:21:44.919530  427001 command_runner.go:130] >       "pinned": false
	I1005 20:21:44.919535  427001 command_runner.go:130] >     },
	I1005 20:21:44.919544  427001 command_runner.go:130] >     {
	I1005 20:21:44.919555  427001 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1005 20:21:44.919566  427001 command_runner.go:130] >       "repoTags": [
	I1005 20:21:44.919577  427001 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1005 20:21:44.919584  427001 command_runner.go:130] >       ],
	I1005 20:21:44.919591  427001 command_runner.go:130] >       "repoDigests": [
	I1005 20:21:44.919603  427001 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1005 20:21:44.919614  427001 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1005 20:21:44.919617  427001 command_runner.go:130] >       ],
	I1005 20:21:44.919622  427001 command_runner.go:130] >       "size": "750414",
	I1005 20:21:44.919626  427001 command_runner.go:130] >       "uid": {
	I1005 20:21:44.919630  427001 command_runner.go:130] >         "value": "65535"
	I1005 20:21:44.919634  427001 command_runner.go:130] >       },
	I1005 20:21:44.919638  427001 command_runner.go:130] >       "username": "",
	I1005 20:21:44.919642  427001 command_runner.go:130] >       "spec": null,
	I1005 20:21:44.919646  427001 command_runner.go:130] >       "pinned": false
	I1005 20:21:44.919649  427001 command_runner.go:130] >     }
	I1005 20:21:44.919652  427001 command_runner.go:130] >   ]
	I1005 20:21:44.919656  427001 command_runner.go:130] > }
	I1005 20:21:44.920634  427001 crio.go:496] all images are preloaded for cri-o runtime.
	I1005 20:21:44.920653  427001 crio.go:415] Images already preloaded, skipping extraction
	I1005 20:21:44.920699  427001 ssh_runner.go:195] Run: sudo crictl images --output json
	I1005 20:21:44.955454  427001 command_runner.go:130] > {
	I1005 20:21:44.955478  427001 command_runner.go:130] >   "images": [
	I1005 20:21:44.955484  427001 command_runner.go:130] >     {
	I1005 20:21:44.955496  427001 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I1005 20:21:44.955503  427001 command_runner.go:130] >       "repoTags": [
	I1005 20:21:44.955523  427001 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1005 20:21:44.955529  427001 command_runner.go:130] >       ],
	I1005 20:21:44.955536  427001 command_runner.go:130] >       "repoDigests": [
	I1005 20:21:44.955555  427001 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1005 20:21:44.955572  427001 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I1005 20:21:44.955581  427001 command_runner.go:130] >       ],
	I1005 20:21:44.955590  427001 command_runner.go:130] >       "size": "65258016",
	I1005 20:21:44.955600  427001 command_runner.go:130] >       "uid": null,
	I1005 20:21:44.955612  427001 command_runner.go:130] >       "username": "",
	I1005 20:21:44.955632  427001 command_runner.go:130] >       "spec": null,
	I1005 20:21:44.955642  427001 command_runner.go:130] >       "pinned": false
	I1005 20:21:44.955647  427001 command_runner.go:130] >     },
	I1005 20:21:44.955651  427001 command_runner.go:130] >     {
	I1005 20:21:44.955660  427001 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1005 20:21:44.955666  427001 command_runner.go:130] >       "repoTags": [
	I1005 20:21:44.955674  427001 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1005 20:21:44.955681  427001 command_runner.go:130] >       ],
	I1005 20:21:44.955688  427001 command_runner.go:130] >       "repoDigests": [
	I1005 20:21:44.955707  427001 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1005 20:21:44.955719  427001 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1005 20:21:44.955725  427001 command_runner.go:130] >       ],
	I1005 20:21:44.955735  427001 command_runner.go:130] >       "size": "31470524",
	I1005 20:21:44.955742  427001 command_runner.go:130] >       "uid": null,
	I1005 20:21:44.955750  427001 command_runner.go:130] >       "username": "",
	I1005 20:21:44.955757  427001 command_runner.go:130] >       "spec": null,
	I1005 20:21:44.955765  427001 command_runner.go:130] >       "pinned": false
	I1005 20:21:44.955770  427001 command_runner.go:130] >     },
	I1005 20:21:44.955776  427001 command_runner.go:130] >     {
	I1005 20:21:44.955787  427001 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I1005 20:21:44.955798  427001 command_runner.go:130] >       "repoTags": [
	I1005 20:21:44.955808  427001 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1005 20:21:44.955818  427001 command_runner.go:130] >       ],
	I1005 20:21:44.955827  427001 command_runner.go:130] >       "repoDigests": [
	I1005 20:21:44.955843  427001 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I1005 20:21:44.955880  427001 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I1005 20:21:44.955890  427001 command_runner.go:130] >       ],
	I1005 20:21:44.955901  427001 command_runner.go:130] >       "size": "53621675",
	I1005 20:21:44.955912  427001 command_runner.go:130] >       "uid": null,
	I1005 20:21:44.955923  427001 command_runner.go:130] >       "username": "",
	I1005 20:21:44.955934  427001 command_runner.go:130] >       "spec": null,
	I1005 20:21:44.955948  427001 command_runner.go:130] >       "pinned": false
	I1005 20:21:44.955958  427001 command_runner.go:130] >     },
	I1005 20:21:44.955966  427001 command_runner.go:130] >     {
	I1005 20:21:44.955980  427001 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I1005 20:21:44.955992  427001 command_runner.go:130] >       "repoTags": [
	I1005 20:21:44.956004  427001 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1005 20:21:44.956014  427001 command_runner.go:130] >       ],
	I1005 20:21:44.956024  427001 command_runner.go:130] >       "repoDigests": [
	I1005 20:21:44.956040  427001 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I1005 20:21:44.956056  427001 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I1005 20:21:44.956075  427001 command_runner.go:130] >       ],
	I1005 20:21:44.956087  427001 command_runner.go:130] >       "size": "295456551",
	I1005 20:21:44.956095  427001 command_runner.go:130] >       "uid": {
	I1005 20:21:44.956105  427001 command_runner.go:130] >         "value": "0"
	I1005 20:21:44.956119  427001 command_runner.go:130] >       },
	I1005 20:21:44.956130  427001 command_runner.go:130] >       "username": "",
	I1005 20:21:44.956139  427001 command_runner.go:130] >       "spec": null,
	I1005 20:21:44.956149  427001 command_runner.go:130] >       "pinned": false
	I1005 20:21:44.956156  427001 command_runner.go:130] >     },
	I1005 20:21:44.956165  427001 command_runner.go:130] >     {
	I1005 20:21:44.956180  427001 command_runner.go:130] >       "id": "cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce",
	I1005 20:21:44.956190  427001 command_runner.go:130] >       "repoTags": [
	I1005 20:21:44.956202  427001 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.2"
	I1005 20:21:44.956212  427001 command_runner.go:130] >       ],
	I1005 20:21:44.956220  427001 command_runner.go:130] >       "repoDigests": [
	I1005 20:21:44.956237  427001 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631",
	I1005 20:21:44.956254  427001 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6beea2e5531a0606613594fd3ed92d71bbdcef99dd3237522049a0b32cad736c"
	I1005 20:21:44.956263  427001 command_runner.go:130] >       ],
	I1005 20:21:44.956277  427001 command_runner.go:130] >       "size": "127149008",
	I1005 20:21:44.956287  427001 command_runner.go:130] >       "uid": {
	I1005 20:21:44.956299  427001 command_runner.go:130] >         "value": "0"
	I1005 20:21:44.956308  427001 command_runner.go:130] >       },
	I1005 20:21:44.956320  427001 command_runner.go:130] >       "username": "",
	I1005 20:21:44.956331  427001 command_runner.go:130] >       "spec": null,
	I1005 20:21:44.956342  427001 command_runner.go:130] >       "pinned": false
	I1005 20:21:44.956353  427001 command_runner.go:130] >     },
	I1005 20:21:44.956361  427001 command_runner.go:130] >     {
	I1005 20:21:44.956373  427001 command_runner.go:130] >       "id": "55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57",
	I1005 20:21:44.956384  427001 command_runner.go:130] >       "repoTags": [
	I1005 20:21:44.956395  427001 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.2"
	I1005 20:21:44.956405  427001 command_runner.go:130] >       ],
	I1005 20:21:44.956415  427001 command_runner.go:130] >       "repoDigests": [
	I1005 20:21:44.956432  427001 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4",
	I1005 20:21:44.956448  427001 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:757a9c9d2f2329799490f9cec6c8ea12dfe4b6225051f6436f39d22a1def682e"
	I1005 20:21:44.956458  427001 command_runner.go:130] >       ],
	I1005 20:21:44.956468  427001 command_runner.go:130] >       "size": "123171638",
	I1005 20:21:44.956479  427001 command_runner.go:130] >       "uid": {
	I1005 20:21:44.956489  427001 command_runner.go:130] >         "value": "0"
	I1005 20:21:44.956497  427001 command_runner.go:130] >       },
	I1005 20:21:44.956507  427001 command_runner.go:130] >       "username": "",
	I1005 20:21:44.956520  427001 command_runner.go:130] >       "spec": null,
	I1005 20:21:44.956531  427001 command_runner.go:130] >       "pinned": false
	I1005 20:21:44.956538  427001 command_runner.go:130] >     },
	I1005 20:21:44.956548  427001 command_runner.go:130] >     {
	I1005 20:21:44.956559  427001 command_runner.go:130] >       "id": "c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0",
	I1005 20:21:44.956570  427001 command_runner.go:130] >       "repoTags": [
	I1005 20:21:44.956583  427001 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.2"
	I1005 20:21:44.956592  427001 command_runner.go:130] >       ],
	I1005 20:21:44.956600  427001 command_runner.go:130] >       "repoDigests": [
	I1005 20:21:44.956616  427001 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded",
	I1005 20:21:44.956633  427001 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf"
	I1005 20:21:44.956642  427001 command_runner.go:130] >       ],
	I1005 20:21:44.956650  427001 command_runner.go:130] >       "size": "74687895",
	I1005 20:21:44.956661  427001 command_runner.go:130] >       "uid": null,
	I1005 20:21:44.956671  427001 command_runner.go:130] >       "username": "",
	I1005 20:21:44.956681  427001 command_runner.go:130] >       "spec": null,
	I1005 20:21:44.956691  427001 command_runner.go:130] >       "pinned": false
	I1005 20:21:44.956700  427001 command_runner.go:130] >     },
	I1005 20:21:44.956712  427001 command_runner.go:130] >     {
	I1005 20:21:44.956727  427001 command_runner.go:130] >       "id": "7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8",
	I1005 20:21:44.956737  427001 command_runner.go:130] >       "repoTags": [
	I1005 20:21:44.956750  427001 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.2"
	I1005 20:21:44.956760  427001 command_runner.go:130] >       ],
	I1005 20:21:44.956769  427001 command_runner.go:130] >       "repoDigests": [
	I1005 20:21:44.956810  427001 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab",
	I1005 20:21:44.956827  427001 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8fc5b9b97128515266d5435273682ba36115d9ca1b68a5749e6f9a23927ef543"
	I1005 20:21:44.956837  427001 command_runner.go:130] >       ],
	I1005 20:21:44.956847  427001 command_runner.go:130] >       "size": "61485878",
	I1005 20:21:44.956857  427001 command_runner.go:130] >       "uid": {
	I1005 20:21:44.956870  427001 command_runner.go:130] >         "value": "0"
	I1005 20:21:44.956880  427001 command_runner.go:130] >       },
	I1005 20:21:44.956888  427001 command_runner.go:130] >       "username": "",
	I1005 20:21:44.956899  427001 command_runner.go:130] >       "spec": null,
	I1005 20:21:44.956909  427001 command_runner.go:130] >       "pinned": false
	I1005 20:21:44.956916  427001 command_runner.go:130] >     },
	I1005 20:21:44.956926  427001 command_runner.go:130] >     {
	I1005 20:21:44.956944  427001 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1005 20:21:44.956954  427001 command_runner.go:130] >       "repoTags": [
	I1005 20:21:44.956964  427001 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1005 20:21:44.956973  427001 command_runner.go:130] >       ],
	I1005 20:21:44.956981  427001 command_runner.go:130] >       "repoDigests": [
	I1005 20:21:44.956997  427001 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1005 20:21:44.957013  427001 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1005 20:21:44.957023  427001 command_runner.go:130] >       ],
	I1005 20:21:44.957034  427001 command_runner.go:130] >       "size": "750414",
	I1005 20:21:44.957045  427001 command_runner.go:130] >       "uid": {
	I1005 20:21:44.957055  427001 command_runner.go:130] >         "value": "65535"
	I1005 20:21:44.957064  427001 command_runner.go:130] >       },
	I1005 20:21:44.957073  427001 command_runner.go:130] >       "username": "",
	I1005 20:21:44.957081  427001 command_runner.go:130] >       "spec": null,
	I1005 20:21:44.957092  427001 command_runner.go:130] >       "pinned": false
	I1005 20:21:44.957099  427001 command_runner.go:130] >     }
	I1005 20:21:44.957108  427001 command_runner.go:130] >   ]
	I1005 20:21:44.957114  427001 command_runner.go:130] > }
	I1005 20:21:44.957255  427001 crio.go:496] all images are preloaded for cri-o runtime.
	I1005 20:21:44.957271  427001 cache_images.go:84] Images are preloaded, skipping loading
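Both crictl dumps above are the raw JSON minikube parses to decide that every image shipped in the preload tarball is already in CRI-O's store, so extraction and loading are skipped. A sketch of a friendlier view of the same data, assuming jq is installed on the node:

	sudo crictl images --output json | jq -r '.images[] | "\(.repoTags[0])\t\(.size)"'
	# e.g. registry.k8s.io/kube-apiserver:v1.28.2	127149008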
	I1005 20:21:44.957353  427001 ssh_runner.go:195] Run: crio config
	I1005 20:21:44.996165  427001 command_runner.go:130] ! time="2023-10-05 20:21:44.995681049Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I1005 20:21:44.996193  427001 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1005 20:21:45.002008  427001 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1005 20:21:45.002042  427001 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1005 20:21:45.002049  427001 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1005 20:21:45.002053  427001 command_runner.go:130] > #
	I1005 20:21:45.002064  427001 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1005 20:21:45.002070  427001 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1005 20:21:45.002076  427001 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1005 20:21:45.002083  427001 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1005 20:21:45.002089  427001 command_runner.go:130] > # reload'.
	I1005 20:21:45.002096  427001 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1005 20:21:45.002105  427001 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1005 20:21:45.002111  427001 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1005 20:21:45.002120  427001 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1005 20:21:45.002126  427001 command_runner.go:130] > [crio]
	I1005 20:21:45.002137  427001 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1005 20:21:45.002145  427001 command_runner.go:130] > # containers images, in this directory.
	I1005 20:21:45.002157  427001 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1005 20:21:45.002166  427001 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1005 20:21:45.002174  427001 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I1005 20:21:45.002181  427001 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1005 20:21:45.002189  427001 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1005 20:21:45.002195  427001 command_runner.go:130] > # storage_driver = "vfs"
	I1005 20:21:45.002203  427001 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1005 20:21:45.002209  427001 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1005 20:21:45.002215  427001 command_runner.go:130] > # storage_option = [
	I1005 20:21:45.002218  427001 command_runner.go:130] > # ]
	I1005 20:21:45.002227  427001 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1005 20:21:45.002235  427001 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1005 20:21:45.002242  427001 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1005 20:21:45.002248  427001 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1005 20:21:45.002257  427001 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1005 20:21:45.002264  427001 command_runner.go:130] > # always happen on a node reboot
	I1005 20:21:45.002271  427001 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1005 20:21:45.002279  427001 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1005 20:21:45.002287  427001 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1005 20:21:45.002300  427001 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1005 20:21:45.002308  427001 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1005 20:21:45.002318  427001 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1005 20:21:45.002328  427001 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1005 20:21:45.002334  427001 command_runner.go:130] > # internal_wipe = true
	I1005 20:21:45.002340  427001 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1005 20:21:45.002349  427001 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1005 20:21:45.002356  427001 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1005 20:21:45.002362  427001 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1005 20:21:45.002370  427001 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1005 20:21:45.002376  427001 command_runner.go:130] > [crio.api]
	I1005 20:21:45.002382  427001 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1005 20:21:45.002388  427001 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1005 20:21:45.002394  427001 command_runner.go:130] > # IP address on which the stream server will listen.
	I1005 20:21:45.002400  427001 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1005 20:21:45.002410  427001 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1005 20:21:45.002418  427001 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1005 20:21:45.002422  427001 command_runner.go:130] > # stream_port = "0"
	I1005 20:21:45.002431  427001 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1005 20:21:45.002435  427001 command_runner.go:130] > # stream_enable_tls = false
	I1005 20:21:45.002444  427001 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1005 20:21:45.002450  427001 command_runner.go:130] > # stream_idle_timeout = ""
	I1005 20:21:45.002457  427001 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1005 20:21:45.002465  427001 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1005 20:21:45.002469  427001 command_runner.go:130] > # minutes.
	I1005 20:21:45.002473  427001 command_runner.go:130] > # stream_tls_cert = ""
	I1005 20:21:45.002482  427001 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1005 20:21:45.002490  427001 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1005 20:21:45.002497  427001 command_runner.go:130] > # stream_tls_key = ""
	I1005 20:21:45.002503  427001 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1005 20:21:45.002511  427001 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1005 20:21:45.002521  427001 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1005 20:21:45.002527  427001 command_runner.go:130] > # stream_tls_ca = ""
	I1005 20:21:45.002537  427001 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1005 20:21:45.002544  427001 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1005 20:21:45.002551  427001 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1005 20:21:45.002558  427001 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1005 20:21:45.002587  427001 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1005 20:21:45.002598  427001 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1005 20:21:45.002602  427001 command_runner.go:130] > [crio.runtime]
	I1005 20:21:45.002608  427001 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1005 20:21:45.002613  427001 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1005 20:21:45.002620  427001 command_runner.go:130] > # "nofile=1024:2048"
	I1005 20:21:45.002626  427001 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1005 20:21:45.002632  427001 command_runner.go:130] > # default_ulimits = [
	I1005 20:21:45.002636  427001 command_runner.go:130] > # ]
	I1005 20:21:45.002644  427001 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1005 20:21:45.002652  427001 command_runner.go:130] > # no_pivot = false
	I1005 20:21:45.002658  427001 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1005 20:21:45.002666  427001 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1005 20:21:45.002673  427001 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1005 20:21:45.002682  427001 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1005 20:21:45.002690  427001 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1005 20:21:45.002696  427001 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1005 20:21:45.002702  427001 command_runner.go:130] > # conmon = ""
	I1005 20:21:45.002707  427001 command_runner.go:130] > # Cgroup setting for conmon
	I1005 20:21:45.002716  427001 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1005 20:21:45.002722  427001 command_runner.go:130] > conmon_cgroup = "pod"
	I1005 20:21:45.002728  427001 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1005 20:21:45.002735  427001 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1005 20:21:45.002742  427001 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1005 20:21:45.002748  427001 command_runner.go:130] > # conmon_env = [
	I1005 20:21:45.002752  427001 command_runner.go:130] > # ]
	I1005 20:21:45.002759  427001 command_runner.go:130] > # Additional environment variables to set for all the
	I1005 20:21:45.002765  427001 command_runner.go:130] > # containers. These are overridden if set in the
	I1005 20:21:45.002772  427001 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1005 20:21:45.002779  427001 command_runner.go:130] > # default_env = [
	I1005 20:21:45.002783  427001 command_runner.go:130] > # ]
	I1005 20:21:45.002791  427001 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1005 20:21:45.002800  427001 command_runner.go:130] > # selinux = false
	I1005 20:21:45.002809  427001 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1005 20:21:45.002818  427001 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1005 20:21:45.002826  427001 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1005 20:21:45.002833  427001 command_runner.go:130] > # seccomp_profile = ""
	I1005 20:21:45.002838  427001 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1005 20:21:45.002846  427001 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1005 20:21:45.002853  427001 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1005 20:21:45.002858  427001 command_runner.go:130] > # which might increase security.
	I1005 20:21:45.002865  427001 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I1005 20:21:45.002871  427001 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1005 20:21:45.002880  427001 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1005 20:21:45.002888  427001 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1005 20:21:45.002894  427001 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1005 20:21:45.002902  427001 command_runner.go:130] > # This option supports live configuration reload.
	I1005 20:21:45.002906  427001 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1005 20:21:45.002924  427001 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1005 20:21:45.002931  427001 command_runner.go:130] > # the cgroup blockio controller.
	I1005 20:21:45.002939  427001 command_runner.go:130] > # blockio_config_file = ""
	I1005 20:21:45.002948  427001 command_runner.go:130] > # Used to change the irqbalance service config file path, which is used for configuring
	I1005 20:21:45.002955  427001 command_runner.go:130] > # the irqbalance daemon.
	I1005 20:21:45.002960  427001 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1005 20:21:45.002968  427001 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1005 20:21:45.002976  427001 command_runner.go:130] > # This option supports live configuration reload.
	I1005 20:21:45.002980  427001 command_runner.go:130] > # rdt_config_file = ""
	I1005 20:21:45.002986  427001 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1005 20:21:45.002992  427001 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1005 20:21:45.002998  427001 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1005 20:21:45.003005  427001 command_runner.go:130] > # separate_pull_cgroup = ""
	I1005 20:21:45.003012  427001 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1005 20:21:45.003021  427001 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1005 20:21:45.003025  427001 command_runner.go:130] > # will be added.
	I1005 20:21:45.003031  427001 command_runner.go:130] > # default_capabilities = [
	I1005 20:21:45.003035  427001 command_runner.go:130] > # 	"CHOWN",
	I1005 20:21:45.003042  427001 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1005 20:21:45.003046  427001 command_runner.go:130] > # 	"FSETID",
	I1005 20:21:45.003055  427001 command_runner.go:130] > # 	"FOWNER",
	I1005 20:21:45.003083  427001 command_runner.go:130] > # 	"SETGID",
	I1005 20:21:45.003094  427001 command_runner.go:130] > # 	"SETUID",
	I1005 20:21:45.003101  427001 command_runner.go:130] > # 	"SETPCAP",
	I1005 20:21:45.003106  427001 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1005 20:21:45.003111  427001 command_runner.go:130] > # 	"KILL",
	I1005 20:21:45.003114  427001 command_runner.go:130] > # ]
	I1005 20:21:45.003124  427001 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1005 20:21:45.003136  427001 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1005 20:21:45.003143  427001 command_runner.go:130] > # add_inheritable_capabilities = true
	I1005 20:21:45.003149  427001 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1005 20:21:45.003157  427001 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1005 20:21:45.003164  427001 command_runner.go:130] > # default_sysctls = [
	I1005 20:21:45.003168  427001 command_runner.go:130] > # ]
	I1005 20:21:45.003173  427001 command_runner.go:130] > # List of devices on the host that a
	I1005 20:21:45.003182  427001 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1005 20:21:45.003188  427001 command_runner.go:130] > # allowed_devices = [
	I1005 20:21:45.003192  427001 command_runner.go:130] > # 	"/dev/fuse",
	I1005 20:21:45.003203  427001 command_runner.go:130] > # ]
	I1005 20:21:45.003212  427001 command_runner.go:130] > # List of additional devices, specified as
	I1005 20:21:45.003246  427001 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1005 20:21:45.003254  427001 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1005 20:21:45.003260  427001 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1005 20:21:45.003267  427001 command_runner.go:130] > # additional_devices = [
	I1005 20:21:45.003271  427001 command_runner.go:130] > # ]
	I1005 20:21:45.003278  427001 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1005 20:21:45.003282  427001 command_runner.go:130] > # cdi_spec_dirs = [
	I1005 20:21:45.003289  427001 command_runner.go:130] > # 	"/etc/cdi",
	I1005 20:21:45.003293  427001 command_runner.go:130] > # 	"/var/run/cdi",
	I1005 20:21:45.003299  427001 command_runner.go:130] > # ]
	I1005 20:21:45.003306  427001 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1005 20:21:45.003314  427001 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1005 20:21:45.003320  427001 command_runner.go:130] > # Defaults to false.
	I1005 20:21:45.003326  427001 command_runner.go:130] > # device_ownership_from_security_context = false
	I1005 20:21:45.003334  427001 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1005 20:21:45.003342  427001 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1005 20:21:45.003349  427001 command_runner.go:130] > # hooks_dir = [
	I1005 20:21:45.003356  427001 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1005 20:21:45.003360  427001 command_runner.go:130] > # ]
	I1005 20:21:45.003370  427001 command_runner.go:130] > # Path to the file specifying the default mounts for each container. The
	I1005 20:21:45.003379  427001 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1005 20:21:45.003387  427001 command_runner.go:130] > # its default mounts from the following two files:
	I1005 20:21:45.003390  427001 command_runner.go:130] > #
	I1005 20:21:45.003399  427001 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1005 20:21:45.003405  427001 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1005 20:21:45.003414  427001 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1005 20:21:45.003420  427001 command_runner.go:130] > #
	I1005 20:21:45.003426  427001 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1005 20:21:45.003434  427001 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1005 20:21:45.003443  427001 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1005 20:21:45.003450  427001 command_runner.go:130] > #      only add mounts it finds in this file.
	I1005 20:21:45.003454  427001 command_runner.go:130] > #
	I1005 20:21:45.003461  427001 command_runner.go:130] > # default_mounts_file = ""
	I1005 20:21:45.003467  427001 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1005 20:21:45.003479  427001 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1005 20:21:45.003485  427001 command_runner.go:130] > # pids_limit = 0
	I1005 20:21:45.003492  427001 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1005 20:21:45.003500  427001 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1005 20:21:45.003508  427001 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1005 20:21:45.003517  427001 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1005 20:21:45.003524  427001 command_runner.go:130] > # log_size_max = -1
	I1005 20:21:45.003531  427001 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1005 20:21:45.003537  427001 command_runner.go:130] > # log_to_journald = false
	I1005 20:21:45.003544  427001 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1005 20:21:45.003551  427001 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1005 20:21:45.003556  427001 command_runner.go:130] > # Path to directory for container attach sockets.
	I1005 20:21:45.003563  427001 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1005 20:21:45.003569  427001 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1005 20:21:45.003575  427001 command_runner.go:130] > # bind_mount_prefix = ""
	I1005 20:21:45.003581  427001 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1005 20:21:45.003587  427001 command_runner.go:130] > # read_only = false
	I1005 20:21:45.003594  427001 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1005 20:21:45.003604  427001 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1005 20:21:45.003611  427001 command_runner.go:130] > # live configuration reload.
	I1005 20:21:45.003617  427001 command_runner.go:130] > # log_level = "info"
	I1005 20:21:45.003625  427001 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1005 20:21:45.003633  427001 command_runner.go:130] > # This option supports live configuration reload.
	I1005 20:21:45.003637  427001 command_runner.go:130] > # log_filter = ""
	I1005 20:21:45.003645  427001 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1005 20:21:45.003654  427001 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1005 20:21:45.003660  427001 command_runner.go:130] > # separated by comma.
	I1005 20:21:45.003666  427001 command_runner.go:130] > # uid_mappings = ""
	I1005 20:21:45.003674  427001 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1005 20:21:45.003683  427001 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1005 20:21:45.003689  427001 command_runner.go:130] > # separated by comma.
	I1005 20:21:45.003694  427001 command_runner.go:130] > # gid_mappings = ""
	I1005 20:21:45.003702  427001 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1005 20:21:45.003711  427001 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1005 20:21:45.003719  427001 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1005 20:21:45.003724  427001 command_runner.go:130] > # minimum_mappable_uid = -1
	I1005 20:21:45.003731  427001 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1005 20:21:45.003740  427001 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1005 20:21:45.003746  427001 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1005 20:21:45.003752  427001 command_runner.go:130] > # minimum_mappable_gid = -1
	I1005 20:21:45.003758  427001 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1005 20:21:45.003767  427001 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1005 20:21:45.003775  427001 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1005 20:21:45.003783  427001 command_runner.go:130] > # ctr_stop_timeout = 30
	I1005 20:21:45.003789  427001 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1005 20:21:45.003800  427001 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1005 20:21:45.003807  427001 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1005 20:21:45.003812  427001 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1005 20:21:45.003819  427001 command_runner.go:130] > # drop_infra_ctr = true
	I1005 20:21:45.003825  427001 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1005 20:21:45.003833  427001 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1005 20:21:45.003840  427001 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1005 20:21:45.003847  427001 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1005 20:21:45.003854  427001 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1005 20:21:45.003862  427001 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1005 20:21:45.003869  427001 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1005 20:21:45.003876  427001 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1005 20:21:45.003883  427001 command_runner.go:130] > # pinns_path = ""
	I1005 20:21:45.003890  427001 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1005 20:21:45.003899  427001 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1005 20:21:45.003908  427001 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1005 20:21:45.003919  427001 command_runner.go:130] > # default_runtime = "runc"
	I1005 20:21:45.003927  427001 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1005 20:21:45.003935  427001 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of creating them as a directory).
	I1005 20:21:45.003946  427001 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1005 20:21:45.003954  427001 command_runner.go:130] > # creation as a file is not desired either.
	I1005 20:21:45.003962  427001 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1005 20:21:45.003971  427001 command_runner.go:130] > # the hostname is being managed dynamically.
	I1005 20:21:45.003979  427001 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1005 20:21:45.003982  427001 command_runner.go:130] > # ]
	I1005 20:21:45.003988  427001 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1005 20:21:45.003997  427001 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1005 20:21:45.004008  427001 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1005 20:21:45.004016  427001 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1005 20:21:45.004022  427001 command_runner.go:130] > #
	I1005 20:21:45.004027  427001 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1005 20:21:45.004035  427001 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1005 20:21:45.004039  427001 command_runner.go:130] > #  runtime_type = "oci"
	I1005 20:21:45.004047  427001 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1005 20:21:45.004052  427001 command_runner.go:130] > #  privileged_without_host_devices = false
	I1005 20:21:45.004059  427001 command_runner.go:130] > #  allowed_annotations = []
	I1005 20:21:45.004063  427001 command_runner.go:130] > # Where:
	I1005 20:21:45.004070  427001 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1005 20:21:45.004077  427001 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1005 20:21:45.004086  427001 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1005 20:21:45.004094  427001 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1005 20:21:45.004100  427001 command_runner.go:130] > #   in $PATH.
	I1005 20:21:45.004107  427001 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1005 20:21:45.004114  427001 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1005 20:21:45.004120  427001 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1005 20:21:45.004126  427001 command_runner.go:130] > #   state.
	I1005 20:21:45.004133  427001 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1005 20:21:45.004141  427001 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1005 20:21:45.004147  427001 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1005 20:21:45.004155  427001 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1005 20:21:45.004164  427001 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1005 20:21:45.004173  427001 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1005 20:21:45.004178  427001 command_runner.go:130] > #   The currently recognized values are:
	I1005 20:21:45.004186  427001 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1005 20:21:45.004195  427001 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1005 20:21:45.004203  427001 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1005 20:21:45.004211  427001 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1005 20:21:45.004221  427001 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1005 20:21:45.004229  427001 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1005 20:21:45.004236  427001 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1005 20:21:45.004244  427001 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1005 20:21:45.004251  427001 command_runner.go:130] > #   should be moved to the container's cgroup
	I1005 20:21:45.004256  427001 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1005 20:21:45.004264  427001 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I1005 20:21:45.004271  427001 command_runner.go:130] > runtime_type = "oci"
	I1005 20:21:45.004275  427001 command_runner.go:130] > runtime_root = "/run/runc"
	I1005 20:21:45.004281  427001 command_runner.go:130] > runtime_config_path = ""
	I1005 20:21:45.004286  427001 command_runner.go:130] > monitor_path = ""
	I1005 20:21:45.004292  427001 command_runner.go:130] > monitor_cgroup = ""
	I1005 20:21:45.004296  427001 command_runner.go:130] > monitor_exec_cgroup = ""
	I1005 20:21:45.004329  427001 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1005 20:21:45.004336  427001 command_runner.go:130] > # running containers
	I1005 20:21:45.004341  427001 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1005 20:21:45.004347  427001 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1005 20:21:45.004356  427001 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1005 20:21:45.004365  427001 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1005 20:21:45.004372  427001 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1005 20:21:45.004377  427001 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1005 20:21:45.004384  427001 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1005 20:21:45.004389  427001 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1005 20:21:45.004396  427001 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1005 20:21:45.004401  427001 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1005 20:21:45.004410  427001 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1005 20:21:45.004417  427001 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1005 20:21:45.004425  427001 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1005 20:21:45.004432  427001 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1005 20:21:45.004442  427001 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1005 20:21:45.004450  427001 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1005 20:21:45.004461  427001 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1005 20:21:45.004471  427001 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1005 20:21:45.004480  427001 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1005 20:21:45.004489  427001 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1005 20:21:45.004495  427001 command_runner.go:130] > # Example:
	I1005 20:21:45.004500  427001 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1005 20:21:45.004507  427001 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1005 20:21:45.004512  427001 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1005 20:21:45.004520  427001 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1005 20:21:45.004523  427001 command_runner.go:130] > # cpuset = "0-1"
	I1005 20:21:45.004528  427001 command_runner.go:130] > # cpushares = 0
	I1005 20:21:45.004533  427001 command_runner.go:130] > # Where:
	I1005 20:21:45.004541  427001 command_runner.go:130] > # The workload name is workload-type.
	I1005 20:21:45.004548  427001 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1005 20:21:45.004556  427001 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1005 20:21:45.004562  427001 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1005 20:21:45.004572  427001 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1005 20:21:45.004581  427001 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1005 20:21:45.004586  427001 command_runner.go:130] > # 
	I1005 20:21:45.004593  427001 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1005 20:21:45.004599  427001 command_runner.go:130] > #
	I1005 20:21:45.004605  427001 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1005 20:21:45.004614  427001 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1005 20:21:45.004622  427001 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1005 20:21:45.004631  427001 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1005 20:21:45.004637  427001 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1005 20:21:45.004643  427001 command_runner.go:130] > [crio.image]
	I1005 20:21:45.004649  427001 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1005 20:21:45.004656  427001 command_runner.go:130] > # default_transport = "docker://"
	I1005 20:21:45.004662  427001 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1005 20:21:45.004671  427001 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1005 20:21:45.004678  427001 command_runner.go:130] > # global_auth_file = ""
	I1005 20:21:45.004683  427001 command_runner.go:130] > # The image used to instantiate infra containers.
	I1005 20:21:45.004690  427001 command_runner.go:130] > # This option supports live configuration reload.
	I1005 20:21:45.004695  427001 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1005 20:21:45.004704  427001 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1005 20:21:45.004711  427001 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1005 20:21:45.004720  427001 command_runner.go:130] > # This option supports live configuration reload.
	I1005 20:21:45.004728  427001 command_runner.go:130] > # pause_image_auth_file = ""
	I1005 20:21:45.004733  427001 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1005 20:21:45.004742  427001 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1005 20:21:45.004750  427001 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1005 20:21:45.004758  427001 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1005 20:21:45.004765  427001 command_runner.go:130] > # pause_command = "/pause"
	I1005 20:21:45.004771  427001 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1005 20:21:45.004779  427001 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1005 20:21:45.004786  427001 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1005 20:21:45.004796  427001 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1005 20:21:45.004803  427001 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1005 20:21:45.004810  427001 command_runner.go:130] > # signature_policy = ""
	I1005 20:21:45.004821  427001 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1005 20:21:45.004830  427001 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1005 20:21:45.004836  427001 command_runner.go:130] > # changing them here.
	I1005 20:21:45.004841  427001 command_runner.go:130] > # insecure_registries = [
	I1005 20:21:45.004848  427001 command_runner.go:130] > # ]
	I1005 20:21:45.004855  427001 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1005 20:21:45.004865  427001 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1005 20:21:45.004872  427001 command_runner.go:130] > # image_volumes = "mkdir"
	I1005 20:21:45.004878  427001 command_runner.go:130] > # Temporary directory to use for storing big files
	I1005 20:21:45.004885  427001 command_runner.go:130] > # big_files_temporary_dir = ""
	I1005 20:21:45.004891  427001 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1005 20:21:45.004898  427001 command_runner.go:130] > # CNI plugins.
	I1005 20:21:45.004902  427001 command_runner.go:130] > [crio.network]
	I1005 20:21:45.004911  427001 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1005 20:21:45.004922  427001 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1005 20:21:45.004929  427001 command_runner.go:130] > # cni_default_network = ""
	I1005 20:21:45.004935  427001 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1005 20:21:45.004942  427001 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1005 20:21:45.004947  427001 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1005 20:21:45.004954  427001 command_runner.go:130] > # plugin_dirs = [
	I1005 20:21:45.004958  427001 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1005 20:21:45.004964  427001 command_runner.go:130] > # ]
	I1005 20:21:45.004970  427001 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1005 20:21:45.004976  427001 command_runner.go:130] > [crio.metrics]
	I1005 20:21:45.004982  427001 command_runner.go:130] > # Globally enable or disable metrics support.
	I1005 20:21:45.004988  427001 command_runner.go:130] > # enable_metrics = false
	I1005 20:21:45.004993  427001 command_runner.go:130] > # Specify enabled metrics collectors.
	I1005 20:21:45.005000  427001 command_runner.go:130] > # By default, all metrics are enabled.
	I1005 20:21:45.005007  427001 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1005 20:21:45.005015  427001 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1005 20:21:45.005021  427001 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1005 20:21:45.005027  427001 command_runner.go:130] > # metrics_collectors = [
	I1005 20:21:45.005032  427001 command_runner.go:130] > # 	"operations",
	I1005 20:21:45.005040  427001 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1005 20:21:45.005047  427001 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1005 20:21:45.005052  427001 command_runner.go:130] > # 	"operations_errors",
	I1005 20:21:45.005058  427001 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1005 20:21:45.005063  427001 command_runner.go:130] > # 	"image_pulls_by_name",
	I1005 20:21:45.005072  427001 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1005 20:21:45.005078  427001 command_runner.go:130] > # 	"image_pulls_failures",
	I1005 20:21:45.005084  427001 command_runner.go:130] > # 	"image_pulls_successes",
	I1005 20:21:45.005091  427001 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1005 20:21:45.005095  427001 command_runner.go:130] > # 	"image_layer_reuse",
	I1005 20:21:45.005102  427001 command_runner.go:130] > # 	"containers_oom_total",
	I1005 20:21:45.005106  427001 command_runner.go:130] > # 	"containers_oom",
	I1005 20:21:45.005110  427001 command_runner.go:130] > # 	"processes_defunct",
	I1005 20:21:45.005117  427001 command_runner.go:130] > # 	"operations_total",
	I1005 20:21:45.005121  427001 command_runner.go:130] > # 	"operations_latency_seconds",
	I1005 20:21:45.005128  427001 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1005 20:21:45.005133  427001 command_runner.go:130] > # 	"operations_errors_total",
	I1005 20:21:45.005139  427001 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1005 20:21:45.005144  427001 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1005 20:21:45.005151  427001 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1005 20:21:45.005156  427001 command_runner.go:130] > # 	"image_pulls_success_total",
	I1005 20:21:45.005162  427001 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1005 20:21:45.005167  427001 command_runner.go:130] > # 	"containers_oom_count_total",
	I1005 20:21:45.005172  427001 command_runner.go:130] > # ]
	I1005 20:21:45.005178  427001 command_runner.go:130] > # The port on which the metrics server will listen.
	I1005 20:21:45.005184  427001 command_runner.go:130] > # metrics_port = 9090
	I1005 20:21:45.005189  427001 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1005 20:21:45.005196  427001 command_runner.go:130] > # metrics_socket = ""
	I1005 20:21:45.005201  427001 command_runner.go:130] > # The certificate for the secure metrics server.
	I1005 20:21:45.005210  427001 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1005 20:21:45.005216  427001 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1005 20:21:45.005223  427001 command_runner.go:130] > # certificate on any modification event.
	I1005 20:21:45.005227  427001 command_runner.go:130] > # metrics_cert = ""
	I1005 20:21:45.005235  427001 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1005 20:21:45.005240  427001 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1005 20:21:45.005247  427001 command_runner.go:130] > # metrics_key = ""
	I1005 20:21:45.005254  427001 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1005 20:21:45.005260  427001 command_runner.go:130] > [crio.tracing]
	I1005 20:21:45.005266  427001 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1005 20:21:45.005273  427001 command_runner.go:130] > # enable_tracing = false
	I1005 20:21:45.005279  427001 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1005 20:21:45.005285  427001 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1005 20:21:45.005291  427001 command_runner.go:130] > # Number of samples to collect per million spans.
	I1005 20:21:45.005298  427001 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1005 20:21:45.005304  427001 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1005 20:21:45.005310  427001 command_runner.go:130] > [crio.stats]
	I1005 20:21:45.005316  427001 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1005 20:21:45.005324  427001 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1005 20:21:45.005331  427001 command_runner.go:130] > # stats_collection_period = 0
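	The dump above is the stock crio.conf shipped in the minikube node image; only conmon_cgroup, cgroup_manager, the runc runtime table, and pause_image deviate from the commented defaults. As a hedged sketch of how the documented option formats fit together (the /etc/crio/crio.conf.d drop-in directory and reload behavior are assumed from CRI-O's documented conventions, not from this log, and the registry host is a made-up example):
	
	# Sketch: override a few of the options documented above via a drop-in,
	# then ask CRI-O to reload. The <<- form strips the leading tabs.
	sudo tee /etc/crio/crio.conf.d/99-local.conf <<-'EOF'
	[crio.runtime]
	# "<ulimit name>=<soft limit>:<hard limit>", per the default_ulimits comment
	default_ulimits = ["nofile=1024:2048"]
	
	[crio.image]
	# lab-only registry without TLS (example hostname, not from this run)
	insecure_registries = ["registry.lab.example:5000"]
	EOF
	sudo systemctl reload crio
	
	Options not marked "supports live configuration reload" in the dump above need a full restart of the crio service rather than a reload.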
	I1005 20:21:45.005413  427001 cni.go:84] Creating CNI manager for ""
	I1005 20:21:45.005430  427001 cni.go:136] 1 nodes found, recommending kindnet
	I1005 20:21:45.005449  427001 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1005 20:21:45.005472  427001 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-401792 NodeName:multinode-401792 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1005 20:21:45.005619  427001 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-401792"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
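	The evictionHard "0%" thresholds above are how minikube disables disk-pressure eviction for test runs. Before this manifest is handed to kubeadm (it is copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below), it can be sanity-checked offline; a minimal sketch, assuming kubeadm v1.28 is on PATH (the 'config validate' subcommand is available in this release series):
	
	# Sketch: check the generated manifest against the v1beta3 schema.
	kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	# Print upstream defaults to compare against minikube's overrides.
	kubeadm config print init-defaults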
	
	I1005 20:21:45.005707  427001 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-401792 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:multinode-401792 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1005 20:21:45.005766  427001 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1005 20:21:45.014407  427001 command_runner.go:130] > kubeadm
	I1005 20:21:45.014440  427001 command_runner.go:130] > kubectl
	I1005 20:21:45.014446  427001 command_runner.go:130] > kubelet
	I1005 20:21:45.015145  427001 binaries.go:44] Found k8s binaries, skipping transfer
	I1005 20:21:45.015234  427001 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1005 20:21:45.024493  427001 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I1005 20:21:45.042332  427001 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1005 20:21:45.060723  427001 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
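	The three scp steps above materialize the kubelet drop-in (10-kubeadm.conf), the base kubelet.service unit, and the kubeadm manifest rendered earlier. A minimal sketch for confirming systemd sees the override, using stock systemctl tooling inside a node shell (e.g. via 'minikube ssh'):
	
	# Sketch: verify the drop-in is merged into the kubelet unit.
	sudo systemctl daemon-reload
	systemctl cat kubelet          # shows kubelet.service plus 10-kubeadm.conf
	systemctl show kubelet -p ExecStart --no-pager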
	I1005 20:21:45.078546  427001 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1005 20:21:45.082176  427001 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
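	The bash one-liner above is idempotent: it filters out any previous control-plane.minikube.internal entry before appending the fresh mapping, so repeated provisioning runs never stack duplicate lines in /etc/hosts. A quick sketch for checking the result (getent and grep are standard tooling; the name and IP are the ones used in this run):
	
	# Sketch: confirm the control-plane alias resolves on the node.
	getent hosts control-plane.minikube.internal   # expect: 192.168.58.2 ...
	grep 'control-plane.minikube.internal' /etc/hosts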
	I1005 20:21:45.093488  427001 certs.go:56] Setting up /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/multinode-401792 for IP: 192.168.58.2
	I1005 20:21:45.093545  427001 certs.go:190] acquiring lock for shared ca certs: {Name:mk1be6ef34f8fc4cfa2ec636f9e6906c15e2096a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:21:45.093712  427001 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17363-334135/.minikube/ca.key
	I1005 20:21:45.093749  427001 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17363-334135/.minikube/proxy-client-ca.key
	I1005 20:21:45.093798  427001 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/multinode-401792/client.key
	I1005 20:21:45.093811  427001 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/multinode-401792/client.crt with IP's: []
	I1005 20:21:45.361102  427001 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/multinode-401792/client.crt ...
	I1005 20:21:45.361137  427001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/multinode-401792/client.crt: {Name:mk54fd6e1d39ede0a5fbae608411b9bcb2461200 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:21:45.361315  427001 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/multinode-401792/client.key ...
	I1005 20:21:45.361326  427001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/multinode-401792/client.key: {Name:mkc9654e714d1136e2358a6f7e47f89775b2d343 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:21:45.361415  427001 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/multinode-401792/apiserver.key.cee25041
	I1005 20:21:45.361431  427001 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/multinode-401792/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1005 20:21:45.411617  427001 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/multinode-401792/apiserver.crt.cee25041 ...
	I1005 20:21:45.411651  427001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/multinode-401792/apiserver.crt.cee25041: {Name:mk01d8c2b475a8a2cdcd5df6dae268851903dba3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:21:45.411810  427001 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/multinode-401792/apiserver.key.cee25041 ...
	I1005 20:21:45.411822  427001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/multinode-401792/apiserver.key.cee25041: {Name:mk1f515a4688969735c84ffd21661dde0fe258d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:21:45.411890  427001 certs.go:337] copying /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/multinode-401792/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/multinode-401792/apiserver.crt
	I1005 20:21:45.411998  427001 certs.go:341] copying /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/multinode-401792/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/multinode-401792/apiserver.key
	I1005 20:21:45.412060  427001 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/multinode-401792/proxy-client.key
	I1005 20:21:45.412077  427001 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/multinode-401792/proxy-client.crt with IP's: []
	I1005 20:21:45.618397  427001 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/multinode-401792/proxy-client.crt ...
	I1005 20:21:45.618437  427001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/multinode-401792/proxy-client.crt: {Name:mk53396301141f9ec256df03d94fa164f4e58793 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:21:45.618633  427001 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/multinode-401792/proxy-client.key ...
	I1005 20:21:45.618645  427001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/multinode-401792/proxy-client.key: {Name:mk1150d6c836aaaf9c3cbce5904208e410c49b44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:21:45.618723  427001 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/multinode-401792/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1005 20:21:45.618743  427001 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/multinode-401792/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1005 20:21:45.618753  427001 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/multinode-401792/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1005 20:21:45.618765  427001 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/multinode-401792/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1005 20:21:45.618775  427001 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-334135/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1005 20:21:45.618789  427001 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-334135/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1005 20:21:45.618799  427001 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-334135/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1005 20:21:45.618812  427001 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-334135/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1005 20:21:45.618867  427001 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-334135/.minikube/certs/home/jenkins/minikube-integration/17363-334135/.minikube/certs/340929.pem (1338 bytes)
	W1005 20:21:45.618906  427001 certs.go:433] ignoring /home/jenkins/minikube-integration/17363-334135/.minikube/certs/home/jenkins/minikube-integration/17363-334135/.minikube/certs/340929_empty.pem, impossibly tiny 0 bytes
	I1005 20:21:45.618918  427001 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-334135/.minikube/certs/home/jenkins/minikube-integration/17363-334135/.minikube/certs/ca-key.pem (1679 bytes)
	I1005 20:21:45.618940  427001 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-334135/.minikube/certs/home/jenkins/minikube-integration/17363-334135/.minikube/certs/ca.pem (1078 bytes)
	I1005 20:21:45.618978  427001 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-334135/.minikube/certs/home/jenkins/minikube-integration/17363-334135/.minikube/certs/cert.pem (1123 bytes)
	I1005 20:21:45.619005  427001 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-334135/.minikube/certs/home/jenkins/minikube-integration/17363-334135/.minikube/certs/key.pem (1675 bytes)
	I1005 20:21:45.619044  427001 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-334135/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17363-334135/.minikube/files/etc/ssl/certs/3409292.pem (1708 bytes)
	I1005 20:21:45.619091  427001 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-334135/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1005 20:21:45.619107  427001 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-334135/.minikube/certs/340929.pem -> /usr/share/ca-certificates/340929.pem
	I1005 20:21:45.619121  427001 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-334135/.minikube/files/etc/ssl/certs/3409292.pem -> /usr/share/ca-certificates/3409292.pem
	I1005 20:21:45.619735  427001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/multinode-401792/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1005 20:21:45.644185  427001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/multinode-401792/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1005 20:21:45.668654  427001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/multinode-401792/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1005 20:21:45.693438  427001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/multinode-401792/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1005 20:21:45.718030  427001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-334135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1005 20:21:45.742544  427001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-334135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1005 20:21:45.766708  427001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-334135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1005 20:21:45.791045  427001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-334135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1005 20:21:45.815302  427001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-334135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1005 20:21:45.839849  427001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-334135/.minikube/certs/340929.pem --> /usr/share/ca-certificates/340929.pem (1338 bytes)
	I1005 20:21:45.864282  427001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-334135/.minikube/files/etc/ssl/certs/3409292.pem --> /usr/share/ca-certificates/3409292.pem (1708 bytes)
	I1005 20:21:45.888760  427001 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1005 20:21:45.907039  427001 ssh_runner.go:195] Run: openssl version
	I1005 20:21:45.912559  427001 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1005 20:21:45.912637  427001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1005 20:21:45.922629  427001 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1005 20:21:45.926468  427001 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  5 20:03 /usr/share/ca-certificates/minikubeCA.pem
	I1005 20:21:45.926504  427001 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  5 20:03 /usr/share/ca-certificates/minikubeCA.pem
	I1005 20:21:45.926549  427001 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1005 20:21:45.933548  427001 command_runner.go:130] > b5213941
	I1005 20:21:45.933646  427001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1005 20:21:45.943549  427001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/340929.pem && ln -fs /usr/share/ca-certificates/340929.pem /etc/ssl/certs/340929.pem"
	I1005 20:21:45.953237  427001 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/340929.pem
	I1005 20:21:45.957098  427001 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  5 20:09 /usr/share/ca-certificates/340929.pem
	I1005 20:21:45.957141  427001 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  5 20:09 /usr/share/ca-certificates/340929.pem
	I1005 20:21:45.957192  427001 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/340929.pem
	I1005 20:21:45.964068  427001 command_runner.go:130] > 51391683
	I1005 20:21:45.964277  427001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/340929.pem /etc/ssl/certs/51391683.0"
	I1005 20:21:45.974235  427001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3409292.pem && ln -fs /usr/share/ca-certificates/3409292.pem /etc/ssl/certs/3409292.pem"
	I1005 20:21:45.984241  427001 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3409292.pem
	I1005 20:21:45.987989  427001 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  5 20:09 /usr/share/ca-certificates/3409292.pem
	I1005 20:21:45.988017  427001 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  5 20:09 /usr/share/ca-certificates/3409292.pem
	I1005 20:21:45.988057  427001 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3409292.pem
	I1005 20:21:45.994812  427001 command_runner.go:130] > 3ec20f2e
	I1005 20:21:45.995058  427001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3409292.pem /etc/ssl/certs/3ec20f2e.0"
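Each `openssl x509 -hash` / `ln -fs` pair above installs a CA into the node's trust store under its OpenSSL subject-hash name (for example `b5213941.0`). A sketch of the same two steps in Go, shelling out to openssl exactly as the ssh_runner commands do; the certificate path is a placeholder:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem" // placeholder path

	// Ask openssl for the subject hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))

	// Link /etc/ssl/certs/<hash>.0 at the certificate so OpenSSL-based
	// clients can find it by subject hash.
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	os.Remove(link) // ignore error; emulates `ln -fs`
	if err := os.Symlink(pemPath, link); err != nil {
		panic(err)
	}
}
```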
	I1005 20:21:46.004951  427001 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1005 20:21:46.008503  427001 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1005 20:21:46.008544  427001 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1005 20:21:46.008585  427001 kubeadm.go:404] StartCluster: {Name:multinode-401792 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-401792 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1005 20:21:46.008659  427001 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1005 20:21:46.008704  427001 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1005 20:21:46.045269  427001 cri.go:89] found id: ""
	I1005 20:21:46.045335  427001 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1005 20:21:46.054355  427001 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1005 20:21:46.054382  427001 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1005 20:21:46.054388  427001 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1005 20:21:46.054470  427001 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1005 20:21:46.063663  427001 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1005 20:21:46.063732  427001 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1005 20:21:46.072772  427001 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1005 20:21:46.072799  427001 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1005 20:21:46.072806  427001 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1005 20:21:46.072816  427001 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1005 20:21:46.072859  427001 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1005 20:21:46.072899  427001 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1005 20:21:46.122247  427001 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I1005 20:21:46.122284  427001 command_runner.go:130] > [init] Using Kubernetes version: v1.28.2
	I1005 20:21:46.122333  427001 kubeadm.go:322] [preflight] Running pre-flight checks
	I1005 20:21:46.122344  427001 command_runner.go:130] > [preflight] Running pre-flight checks
	I1005 20:21:46.160319  427001 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1005 20:21:46.160353  427001 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1005 20:21:46.160425  427001 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1044-gcp
	I1005 20:21:46.160460  427001 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1044-gcp
	I1005 20:21:46.160535  427001 kubeadm.go:322] OS: Linux
	I1005 20:21:46.160545  427001 command_runner.go:130] > OS: Linux
	I1005 20:21:46.160613  427001 kubeadm.go:322] CGROUPS_CPU: enabled
	I1005 20:21:46.160641  427001 command_runner.go:130] > CGROUPS_CPU: enabled
	I1005 20:21:46.160724  427001 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1005 20:21:46.160731  427001 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1005 20:21:46.160775  427001 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1005 20:21:46.160782  427001 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1005 20:21:46.160823  427001 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1005 20:21:46.160838  427001 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1005 20:21:46.160911  427001 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1005 20:21:46.160932  427001 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1005 20:21:46.161010  427001 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1005 20:21:46.161020  427001 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1005 20:21:46.161057  427001 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1005 20:21:46.161064  427001 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1005 20:21:46.161114  427001 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1005 20:21:46.161125  427001 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1005 20:21:46.161179  427001 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1005 20:21:46.161188  427001 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1005 20:21:46.230149  427001 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1005 20:21:46.230177  427001 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1005 20:21:46.230270  427001 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1005 20:21:46.230279  427001 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1005 20:21:46.230380  427001 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1005 20:21:46.230398  427001 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1005 20:21:46.446121  427001 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1005 20:21:46.449534  427001 out.go:204]   - Generating certificates and keys ...
	I1005 20:21:46.446189  427001 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1005 20:21:46.449782  427001 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1005 20:21:46.449804  427001 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1005 20:21:46.449890  427001 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1005 20:21:46.449901  427001 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1005 20:21:46.604034  427001 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1005 20:21:46.604067  427001 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1005 20:21:46.840481  427001 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1005 20:21:46.840510  427001 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1005 20:21:46.930673  427001 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1005 20:21:46.930731  427001 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1005 20:21:47.179373  427001 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1005 20:21:47.179407  427001 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1005 20:21:47.579584  427001 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1005 20:21:47.579612  427001 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1005 20:21:47.579756  427001 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-401792] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1005 20:21:47.579786  427001 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-401792] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1005 20:21:47.881258  427001 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1005 20:21:47.881285  427001 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1005 20:21:47.881437  427001 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-401792] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1005 20:21:47.881448  427001 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-401792] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1005 20:21:48.106118  427001 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1005 20:21:48.106147  427001 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1005 20:21:48.231209  427001 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1005 20:21:48.231255  427001 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1005 20:21:48.512977  427001 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1005 20:21:48.513027  427001 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1005 20:21:48.513631  427001 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1005 20:21:48.513657  427001 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1005 20:21:48.622882  427001 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1005 20:21:48.622918  427001 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1005 20:21:48.917331  427001 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1005 20:21:48.917431  427001 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1005 20:21:49.104832  427001 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1005 20:21:49.104863  427001 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1005 20:21:49.378999  427001 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1005 20:21:49.379036  427001 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1005 20:21:49.379396  427001 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1005 20:21:49.379429  427001 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1005 20:21:49.382038  427001 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1005 20:21:49.384702  427001 out.go:204]   - Booting up control plane ...
	I1005 20:21:49.382139  427001 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1005 20:21:49.384842  427001 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1005 20:21:49.384869  427001 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1005 20:21:49.385052  427001 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1005 20:21:49.385077  427001 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1005 20:21:49.385217  427001 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1005 20:21:49.385231  427001 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1005 20:21:49.394447  427001 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1005 20:21:49.394491  427001 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1005 20:21:49.395182  427001 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1005 20:21:49.395215  427001 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1005 20:21:49.395297  427001 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1005 20:21:49.395313  427001 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1005 20:21:49.478005  427001 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1005 20:21:49.478057  427001 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1005 20:21:54.480202  427001 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.002289 seconds
	I1005 20:21:54.480231  427001 command_runner.go:130] > [apiclient] All control plane components are healthy after 5.002289 seconds
	I1005 20:21:54.480390  427001 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1005 20:21:54.480410  427001 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1005 20:21:54.495580  427001 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1005 20:21:54.495621  427001 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1005 20:21:55.019249  427001 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1005 20:21:55.019291  427001 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1005 20:21:55.019497  427001 kubeadm.go:322] [mark-control-plane] Marking the node multinode-401792 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1005 20:21:55.019509  427001 command_runner.go:130] > [mark-control-plane] Marking the node multinode-401792 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1005 20:21:55.531365  427001 kubeadm.go:322] [bootstrap-token] Using token: ams3ii.28hv5uhrc9igjme4
	I1005 20:21:55.533138  427001 out.go:204]   - Configuring RBAC rules ...
	I1005 20:21:55.531445  427001 command_runner.go:130] > [bootstrap-token] Using token: ams3ii.28hv5uhrc9igjme4
	I1005 20:21:55.533288  427001 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1005 20:21:55.533322  427001 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1005 20:21:55.538725  427001 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1005 20:21:55.538757  427001 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1005 20:21:55.547020  427001 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1005 20:21:55.547053  427001 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1005 20:21:55.550546  427001 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1005 20:21:55.550581  427001 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1005 20:21:55.554544  427001 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1005 20:21:55.554563  427001 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1005 20:21:55.559579  427001 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1005 20:21:55.559618  427001 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1005 20:21:55.571820  427001 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1005 20:21:55.571846  427001 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1005 20:21:55.778943  427001 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1005 20:21:55.778973  427001 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1005 20:21:55.945808  427001 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1005 20:21:55.945893  427001 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1005 20:21:55.947262  427001 kubeadm.go:322] 
	I1005 20:21:55.947441  427001 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1005 20:21:55.947455  427001 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1005 20:21:55.947489  427001 kubeadm.go:322] 
	I1005 20:21:55.947634  427001 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1005 20:21:55.947653  427001 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1005 20:21:55.947660  427001 kubeadm.go:322] 
	I1005 20:21:55.947729  427001 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1005 20:21:55.947742  427001 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1005 20:21:55.947822  427001 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1005 20:21:55.947832  427001 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1005 20:21:55.947899  427001 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1005 20:21:55.947909  427001 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1005 20:21:55.947914  427001 kubeadm.go:322] 
	I1005 20:21:55.947987  427001 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1005 20:21:55.948005  427001 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1005 20:21:55.948021  427001 kubeadm.go:322] 
	I1005 20:21:55.948088  427001 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1005 20:21:55.948107  427001 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1005 20:21:55.948125  427001 kubeadm.go:322] 
	I1005 20:21:55.948182  427001 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1005 20:21:55.948196  427001 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1005 20:21:55.948276  427001 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1005 20:21:55.948290  427001 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1005 20:21:55.948359  427001 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1005 20:21:55.948379  427001 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1005 20:21:55.948386  427001 kubeadm.go:322] 
	I1005 20:21:55.948494  427001 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1005 20:21:55.948506  427001 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1005 20:21:55.948609  427001 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1005 20:21:55.948650  427001 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1005 20:21:55.948680  427001 kubeadm.go:322] 
	I1005 20:21:55.948791  427001 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token ams3ii.28hv5uhrc9igjme4 \
	I1005 20:21:55.948803  427001 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ams3ii.28hv5uhrc9igjme4 \
	I1005 20:21:55.948937  427001 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:af54c40b34df9aa62a3cf1403ac0941464ca2ce3fa61291d1928dbb7869129bb \
	I1005 20:21:55.948948  427001 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:af54c40b34df9aa62a3cf1403ac0941464ca2ce3fa61291d1928dbb7869129bb \
	I1005 20:21:55.948976  427001 command_runner.go:130] > 	--control-plane 
	I1005 20:21:55.948986  427001 kubeadm.go:322] 	--control-plane 
	I1005 20:21:55.948992  427001 kubeadm.go:322] 
	I1005 20:21:55.949105  427001 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1005 20:21:55.949116  427001 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1005 20:21:55.949122  427001 kubeadm.go:322] 
	I1005 20:21:55.949232  427001 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token ams3ii.28hv5uhrc9igjme4 \
	I1005 20:21:55.949243  427001 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ams3ii.28hv5uhrc9igjme4 \
	I1005 20:21:55.949379  427001 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:af54c40b34df9aa62a3cf1403ac0941464ca2ce3fa61291d1928dbb7869129bb 
	I1005 20:21:55.949397  427001 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:af54c40b34df9aa62a3cf1403ac0941464ca2ce3fa61291d1928dbb7869129bb 
	I1005 20:21:55.951701  427001 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1044-gcp\n", err: exit status 1
	I1005 20:21:55.951730  427001 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1044-gcp\n", err: exit status 1
	I1005 20:21:55.951936  427001 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1005 20:21:55.951965  427001 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
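The join commands printed by kubeadm above pin the cluster CA with `--discovery-token-ca-cert-hash`. kubeadm defines this value as `sha256:` followed by the SHA-256 digest of the CA certificate's DER-encoded Subject Public Key Info, so it can be recomputed from the CA PEM; a sketch (the path is a placeholder):

```go
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // placeholder path
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA cert.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}
```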
	I1005 20:21:55.951996  427001 cni.go:84] Creating CNI manager for ""
	I1005 20:21:55.952025  427001 cni.go:136] 1 nodes found, recommending kindnet
	I1005 20:21:55.953970  427001 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1005 20:21:55.955446  427001 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1005 20:21:55.960195  427001 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1005 20:21:55.960231  427001 command_runner.go:130] >   Size: 3955775   	Blocks: 7728       IO Block: 4096   regular file
	I1005 20:21:55.960242  427001 command_runner.go:130] > Device: 35h/53d	Inode: 1303299     Links: 1
	I1005 20:21:55.960253  427001 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1005 20:21:55.960263  427001 command_runner.go:130] > Access: 2023-05-09 19:53:47.000000000 +0000
	I1005 20:21:55.960271  427001 command_runner.go:130] > Modify: 2023-05-09 19:53:47.000000000 +0000
	I1005 20:21:55.960280  427001 command_runner.go:130] > Change: 2023-10-05 20:03:13.391912165 +0000
	I1005 20:21:55.960293  427001 command_runner.go:130] >  Birth: 2023-10-05 20:03:13.367909862 +0000
	I1005 20:21:55.960357  427001 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
	I1005 20:21:55.960383  427001 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1005 20:21:56.036214  427001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1005 20:21:56.726842  427001 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1005 20:21:56.732953  427001 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1005 20:21:56.741444  427001 command_runner.go:130] > serviceaccount/kindnet created
	I1005 20:21:56.754970  427001 command_runner.go:130] > daemonset.apps/kindnet created
	I1005 20:21:56.760134  427001 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1005 20:21:56.760213  427001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:21:56.760235  427001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=300d55cee86053f5b4c7a654fc8e7b9d3c030d53 minikube.k8s.io/name=multinode-401792 minikube.k8s.io/updated_at=2023_10_05T20_21_56_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:21:56.849600  427001 command_runner.go:130] > node/multinode-401792 labeled
	I1005 20:21:56.852700  427001 command_runner.go:130] > -16
	I1005 20:21:56.852726  427001 ops.go:34] apiserver oom_adj: -16
	I1005 20:21:56.852789  427001 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1005 20:21:56.852900  427001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:21:56.925934  427001 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1005 20:21:56.929051  427001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:21:57.038689  427001 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1005 20:21:57.539552  427001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:21:57.608987  427001 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1005 20:21:58.039666  427001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:21:58.108824  427001 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1005 20:21:58.539241  427001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:21:58.607762  427001 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1005 20:21:59.039214  427001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:21:59.107401  427001 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1005 20:21:59.539046  427001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:21:59.608232  427001 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1005 20:22:00.039897  427001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:22:00.110417  427001 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1005 20:22:00.538950  427001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:22:00.606523  427001 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1005 20:22:01.039051  427001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:22:01.109659  427001 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1005 20:22:01.538981  427001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:22:01.604852  427001 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1005 20:22:02.039203  427001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:22:02.108353  427001 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1005 20:22:02.539438  427001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:22:02.609727  427001 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1005 20:22:03.039208  427001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:22:03.120124  427001 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1005 20:22:03.539201  427001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:22:03.607040  427001 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1005 20:22:04.039703  427001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:22:04.105442  427001 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1005 20:22:04.539825  427001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:22:04.610929  427001 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1005 20:22:05.039599  427001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:22:05.108311  427001 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1005 20:22:05.538913  427001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:22:05.607379  427001 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1005 20:22:06.038942  427001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:22:06.109293  427001 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1005 20:22:06.539911  427001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:22:06.610188  427001 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1005 20:22:07.038973  427001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:22:07.104297  427001 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1005 20:22:07.539807  427001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:22:07.615147  427001 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1005 20:22:08.039578  427001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:22:08.107000  427001 command_runner.go:130] > NAME      SECRETS   AGE
	I1005 20:22:08.107022  427001 command_runner.go:130] > default   0         1s
	I1005 20:22:08.109734  427001 kubeadm.go:1081] duration metric: took 11.349591404s to wait for elevateKubeSystemPrivileges.
	I1005 20:22:08.109780  427001 kubeadm.go:406] StartCluster complete in 22.101195536s
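The long run of `serviceaccounts "default" not found` errors above is the expected shape of this step: minikube simply polls `kubectl get sa default` until the controller manager creates the account, which here took about 11.3s. A hedged sketch of that poll-until-success pattern (the command and timings are placeholders, not minikube's elevateKubeSystemPrivileges code):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitFor polls cmd every interval until it exits 0 or timeout elapses.
func waitFor(interval, timeout time.Duration, name string, args ...string) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command(name, args...).Run(); err == nil {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, name)
}

func main() {
	// Placeholder readiness check; the real loop runs kubectl inside the node.
	err := waitFor(500*time.Millisecond, 2*time.Minute,
		"kubectl", "get", "sa", "default")
	fmt.Println("done:", err)
}
```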
	I1005 20:22:08.109806  427001 settings.go:142] acquiring lock: {Name:mk6ed3422387c6b56e20ba6eb900649f1c8038d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:22:08.109900  427001 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17363-334135/kubeconfig
	I1005 20:22:08.110903  427001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-334135/kubeconfig: {Name:mk99d37d95bb8af0e1f4fc14f039efe68f627fd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:22:08.111241  427001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1005 20:22:08.111342  427001 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1005 20:22:08.111421  427001 addons.go:69] Setting storage-provisioner=true in profile "multinode-401792"
	I1005 20:22:08.111428  427001 addons.go:69] Setting default-storageclass=true in profile "multinode-401792"
	I1005 20:22:08.111446  427001 addons.go:231] Setting addon storage-provisioner=true in "multinode-401792"
	I1005 20:22:08.111449  427001 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-401792"
	I1005 20:22:08.111457  427001 config.go:182] Loaded profile config "multinode-401792": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1005 20:22:08.111520  427001 host.go:66] Checking if "multinode-401792" exists ...
	I1005 20:22:08.111640  427001 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17363-334135/kubeconfig
	I1005 20:22:08.111897  427001 cli_runner.go:164] Run: docker container inspect multinode-401792 --format={{.State.Status}}
	I1005 20:22:08.112010  427001 cli_runner.go:164] Run: docker container inspect multinode-401792 --format={{.State.Status}}
	I1005 20:22:08.111992  427001 kapi.go:59] client config for multinode-401792: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17363-334135/.minikube/profiles/multinode-401792/client.crt", KeyFile:"/home/jenkins/minikube-integration/17363-334135/.minikube/profiles/multinode-401792/client.key", CAFile:"/home/jenkins/minikube-integration/17363-334135/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bfbf60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1005 20:22:08.112810  427001 cert_rotation.go:137] Starting client certificate rotation controller
	I1005 20:22:08.113099  427001 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1005 20:22:08.113113  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:08.113123  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:08.113129  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:08.123878  427001 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1005 20:22:08.123914  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:08.123926  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:08.123936  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:08.123945  427001 round_trippers.go:580]     Content-Length: 291
	I1005 20:22:08.123953  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:08 GMT
	I1005 20:22:08.123961  427001 round_trippers.go:580]     Audit-Id: 8f70ff22-8eb0-4794-a8fb-a8538aa66f5c
	I1005 20:22:08.123969  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:08.123984  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:08.124022  427001 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"4c04a5db-f258-4e51-aa96-1b09daef1dd4","resourceVersion":"270","creationTimestamp":"2023-10-05T20:21:55Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1005 20:22:08.124825  427001 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"4c04a5db-f258-4e51-aa96-1b09daef1dd4","resourceVersion":"270","creationTimestamp":"2023-10-05T20:21:55Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1005 20:22:08.124925  427001 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1005 20:22:08.124941  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:08.124951  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:08.124960  427001 round_trippers.go:473]     Content-Type: application/json
	I1005 20:22:08.124968  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:08.132921  427001 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17363-334135/kubeconfig
	I1005 20:22:08.133139  427001 kapi.go:59] client config for multinode-401792: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17363-334135/.minikube/profiles/multinode-401792/client.crt", KeyFile:"/home/jenkins/minikube-integration/17363-334135/.minikube/profiles/multinode-401792/client.key", CAFile:"/home/jenkins/minikube-integration/17363-334135/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bfbf60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1005 20:22:08.133382  427001 addons.go:231] Setting addon default-storageclass=true in "multinode-401792"
	I1005 20:22:08.133421  427001 host.go:66] Checking if "multinode-401792" exists ...
	I1005 20:22:08.133781  427001 cli_runner.go:164] Run: docker container inspect multinode-401792 --format={{.State.Status}}
	I1005 20:22:08.134961  427001 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1005 20:22:08.134988  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:08.134997  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:08 GMT
	I1005 20:22:08.135002  427001 round_trippers.go:580]     Audit-Id: 9880e7cc-4125-48da-b53f-a517fb598aa7
	I1005 20:22:08.135008  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:08.135013  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:08.135018  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:08.135023  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:08.135030  427001 round_trippers.go:580]     Content-Length: 291
	I1005 20:22:08.135087  427001 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"4c04a5db-f258-4e51-aa96-1b09daef1dd4","resourceVersion":"337","creationTimestamp":"2023-10-05T20:21:55Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1005 20:22:08.135302  427001 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1005 20:22:08.135325  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:08.135336  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:08.135346  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:08.137933  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:08.137960  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:08.137982  427001 round_trippers.go:580]     Audit-Id: 18f6617e-9082-4c05-b699-ed13900a4fa4
	I1005 20:22:08.137992  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:08.138001  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:08.138010  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:08.138019  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:08.138030  427001 round_trippers.go:580]     Content-Length: 291
	I1005 20:22:08.138038  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:08 GMT
	I1005 20:22:08.138069  427001 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"4c04a5db-f258-4e51-aa96-1b09daef1dd4","resourceVersion":"337","creationTimestamp":"2023-10-05T20:21:55Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1005 20:22:08.138191  427001 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-401792" context rescaled to 1 replicas
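The GET/PUT pair above talks to the Deployment's `scale` subresource to drop CoreDNS from 2 replicas to 1. The same call can be made with client-go's GetScale/UpdateScale; a sketch under the assumption of a reachable kubeconfig (the path is a placeholder), not minikube's kapi code:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx := context.Background()
	deploys := cs.AppsV1().Deployments("kube-system")

	// Read the scale subresource, then write it back with one replica,
	// mirroring the GET/PUT round trip in the log.
	scale, err := deploys.GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	scale.Spec.Replicas = 1
	if _, err := deploys.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("coredns rescaled to 1 replica")
}
```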
	I1005 20:22:08.138225  427001 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1005 20:22:08.140994  427001 out.go:177] * Verifying Kubernetes components...
	I1005 20:22:08.142393  427001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1005 20:22:08.144138  427001 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1005 20:22:08.145601  427001 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1005 20:22:08.145630  427001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1005 20:22:08.145706  427001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-401792
	I1005 20:22:08.155685  427001 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1005 20:22:08.155717  427001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1005 20:22:08.155780  427001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-401792
	I1005 20:22:08.166166  427001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/multinode-401792/id_rsa Username:docker}
	I1005 20:22:08.177460  427001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/multinode-401792/id_rsa Username:docker}
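The sshutil lines above open SSH sessions to 127.0.0.1:33149, the container's forwarded port 22, authenticating as `docker` with the machine's id_rsa. A hypothetical equivalent using golang.org/x/crypto/ssh (key path and port are placeholders; host-key checking is skipped only because the target is a local test container):

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/home/user/.minikube/machines/demo/id_rsa") // placeholder
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a local test node
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33149", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// Each command gets its own session, as the ssh_runner Run lines do.
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("sudo systemctl is-active kubelet")
	fmt.Printf("%s err=%v\n", out, err)
}
```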
	I1005 20:22:08.198258  427001 command_runner.go:130] > apiVersion: v1
	I1005 20:22:08.198287  427001 command_runner.go:130] > data:
	I1005 20:22:08.198295  427001 command_runner.go:130] >   Corefile: |
	I1005 20:22:08.198301  427001 command_runner.go:130] >     .:53 {
	I1005 20:22:08.198307  427001 command_runner.go:130] >         errors
	I1005 20:22:08.198315  427001 command_runner.go:130] >         health {
	I1005 20:22:08.198323  427001 command_runner.go:130] >            lameduck 5s
	I1005 20:22:08.198329  427001 command_runner.go:130] >         }
	I1005 20:22:08.198335  427001 command_runner.go:130] >         ready
	I1005 20:22:08.198345  427001 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1005 20:22:08.198353  427001 command_runner.go:130] >            pods insecure
	I1005 20:22:08.198362  427001 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1005 20:22:08.198370  427001 command_runner.go:130] >            ttl 30
	I1005 20:22:08.198376  427001 command_runner.go:130] >         }
	I1005 20:22:08.198399  427001 command_runner.go:130] >         prometheus :9153
	I1005 20:22:08.198407  427001 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1005 20:22:08.198415  427001 command_runner.go:130] >            max_concurrent 1000
	I1005 20:22:08.198424  427001 command_runner.go:130] >         }
	I1005 20:22:08.198431  427001 command_runner.go:130] >         cache 30
	I1005 20:22:08.198438  427001 command_runner.go:130] >         loop
	I1005 20:22:08.198444  427001 command_runner.go:130] >         reload
	I1005 20:22:08.198451  427001 command_runner.go:130] >         loadbalance
	I1005 20:22:08.198457  427001 command_runner.go:130] >     }
	I1005 20:22:08.198464  427001 command_runner.go:130] > kind: ConfigMap
	I1005 20:22:08.198470  427001 command_runner.go:130] > metadata:
	I1005 20:22:08.198480  427001 command_runner.go:130] >   creationTimestamp: "2023-10-05T20:21:55Z"
	I1005 20:22:08.198486  427001 command_runner.go:130] >   name: coredns
	I1005 20:22:08.198493  427001 command_runner.go:130] >   namespace: kube-system
	I1005 20:22:08.198500  427001 command_runner.go:130] >   resourceVersion: "266"
	I1005 20:22:08.198508  427001 command_runner.go:130] >   uid: f66604b9-e8e8-4749-9223-d96369a2dcd7
	I1005 20:22:08.201998  427001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1005 20:22:08.202387  427001 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17363-334135/kubeconfig
	I1005 20:22:08.202763  427001 kapi.go:59] client config for multinode-401792: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17363-334135/.minikube/profiles/multinode-401792/client.crt", KeyFile:"/home/jenkins/minikube-integration/17363-334135/.minikube/profiles/multinode-401792/client.key", CAFile:"/home/jenkins/minikube-integration/17363-334135/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bfbf60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1005 20:22:08.203158  427001 node_ready.go:35] waiting up to 6m0s for node "multinode-401792" to be "Ready" ...
	I1005 20:22:08.203262  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:08.203277  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:08.203289  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:08.203302  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:08.206011  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:08.206040  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:08.206050  427001 round_trippers.go:580]     Audit-Id: 7a5eaf51-ffa5-4f57-92b2-b390407d92ca
	I1005 20:22:08.206060  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:08.206068  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:08.206078  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:08.206086  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:08.206095  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:08 GMT
	I1005 20:22:08.206230  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"329","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations [truncated 6037 chars]
	I1005 20:22:08.207035  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:08.207056  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:08.207085  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:08.207095  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:08.220129  427001 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1005 20:22:08.220165  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:08.220180  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:08.220190  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:08 GMT
	I1005 20:22:08.220200  427001 round_trippers.go:580]     Audit-Id: a099b529-ba69-46ad-b49c-369f701e7c47
	I1005 20:22:08.220208  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:08.220217  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:08.220226  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:08.220432  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"329","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations [truncated 6037 chars]
	I1005 20:22:08.340890  427001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1005 20:22:08.344869  427001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1005 20:22:08.721344  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:08.721370  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:08.721382  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:08.721392  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:08.734939  427001 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1005 20:22:08.734970  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:08.734981  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:08.734989  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:08.734996  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:08.735004  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:08.735011  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:08 GMT
	I1005 20:22:08.735019  427001 round_trippers.go:580]     Audit-Id: 801bd0e2-5e73-4cfb-8971-6fde0399e664
	I1005 20:22:08.735184  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"350","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1005 20:22:08.840508  427001 command_runner.go:130] > configmap/coredns replaced
	I1005 20:22:08.921324  427001 start.go:923] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
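
Applying the two sed insertions from the replace pipeline above to the ConfigMap printed earlier gives a Corefile roughly like the following; this is a reconstruction from the log, not a capture from the cluster:

	.:53 {
	    log
	    errors
	    health {
	       lameduck 5s
	    }
	    ready
	    kubernetes cluster.local in-addr.arpa ip6.arpa {
	       pods insecure
	       fallthrough in-addr.arpa ip6.arpa
	       ttl 30
	    }
	    prometheus :9153
	    hosts {
	       192.168.58.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf {
	       max_concurrent 1000
	    }
	    cache 30
	    loop
	    reload
	    loadbalance
	}
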
	I1005 20:22:09.221936  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:09.221967  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:09.221979  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:09.221989  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:09.224612  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:09.224644  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:09.224656  427001 round_trippers.go:580]     Audit-Id: c426b2f0-adf7-47c8-8efc-004112122470
	I1005 20:22:09.224665  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:09.224674  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:09.224682  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:09.224692  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:09.224704  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:09 GMT
	I1005 20:22:09.224836  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"350","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1005 20:22:09.236393  427001 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1005 20:22:09.242725  427001 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1005 20:22:09.251725  427001 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1005 20:22:09.264370  427001 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1005 20:22:09.272562  427001 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1005 20:22:09.286984  427001 command_runner.go:130] > pod/storage-provisioner created
	I1005 20:22:09.288064  427001 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1005 20:22:09.288213  427001 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I1005 20:22:09.288226  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:09.288238  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:09.288248  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:09.290920  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:09.290957  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:09.290967  427001 round_trippers.go:580]     Content-Length: 1273
	I1005 20:22:09.290976  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:09 GMT
	I1005 20:22:09.290985  427001 round_trippers.go:580]     Audit-Id: 24e7634e-891e-4953-a77c-51e621846d7f
	I1005 20:22:09.290993  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:09.291007  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:09.291020  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:09.291034  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:09.291100  427001 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"402"},"items":[{"metadata":{"name":"standard","uid":"65116931-afc0-4836-bfa8-edaa0ebad209","resourceVersion":"390","creationTimestamp":"2023-10-05T20:22:09Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-05T20:22:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1005 20:22:09.291498  427001 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"65116931-afc0-4836-bfa8-edaa0ebad209","resourceVersion":"390","creationTimestamp":"2023-10-05T20:22:09Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-05T20:22:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1005 20:22:09.291565  427001 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1005 20:22:09.291576  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:09.291588  427001 round_trippers.go:473]     Content-Type: application/json
	I1005 20:22:09.291600  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:09.291614  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:09.294565  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:09.294590  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:09.294600  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:09.294608  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:09.294616  427001 round_trippers.go:580]     Content-Length: 1220
	I1005 20:22:09.294623  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:09 GMT
	I1005 20:22:09.294632  427001 round_trippers.go:580]     Audit-Id: ab579082-1342-4fb4-a6e2-690119679e67
	I1005 20:22:09.294645  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:09.294668  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:09.294712  427001 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"65116931-afc0-4836-bfa8-edaa0ebad209","resourceVersion":"390","creationTimestamp":"2023-10-05T20:22:09Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-05T20:22:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1005 20:22:09.296430  427001 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1005 20:22:09.297540  427001 addons.go:502] enable addons completed in 1.186204128s: enabled=[storage-provisioner default-storageclass]
	I1005 20:22:09.721215  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:09.721242  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:09.721255  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:09.721264  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:09.724384  427001 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1005 20:22:09.724412  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:09.724422  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:09.724430  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:09.724438  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:09 GMT
	I1005 20:22:09.724444  427001 round_trippers.go:580]     Audit-Id: 7b410869-51b8-490f-b683-dc9b516d91ee
	I1005 20:22:09.724451  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:09.724458  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:09.724659  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"350","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1005 20:22:10.221311  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:10.221345  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:10.221360  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:10.221371  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:10.223946  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:10.223974  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:10.223985  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:10.223993  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:10.224002  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:10.224009  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:10 GMT
	I1005 20:22:10.224017  427001 round_trippers.go:580]     Audit-Id: c944836e-7735-4e9e-bb48-94c938473f96
	I1005 20:22:10.224024  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:10.224131  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"350","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1005 20:22:10.224497  427001 node_ready.go:58] node "multinode-401792" has status "Ready":"False"
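
The half-second GET loop against /api/v1/nodes/multinode-401792 that fills the rest of this log is the readiness poll behind node_ready.go. A minimal client-go sketch of the same check, illustrative only (not minikube's implementation; the kubeconfig path is a placeholder), would be:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Placeholder path; minikube builds its rest.Config in-process instead.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll every 500ms for up to 6 minutes, mirroring the 6m0s wait in the log.
		err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
			node, err := client.CoreV1().Nodes().Get(context.TODO(), "multinode-401792", metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			// The node counts as Ready when its NodeReady condition reports "True".
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
		if err != nil {
			panic(err)
		}
		fmt.Println("node is Ready")
	}
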
	I1005 20:22:10.721828  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:10.721851  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:10.721860  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:10.721866  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:10.724379  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:10.724408  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:10.724418  427001 round_trippers.go:580]     Audit-Id: 58e49f65-2048-4b31-ab1f-0361ed4df051
	I1005 20:22:10.724427  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:10.724435  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:10.724443  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:10.724456  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:10.724466  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:10 GMT
	I1005 20:22:10.724595  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"350","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1005 20:22:11.221072  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:11.221098  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:11.221107  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:11.221113  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:11.223728  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:11.223751  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:11.223758  427001 round_trippers.go:580]     Audit-Id: 28d07373-70bc-4661-933d-839aeb9a2aea
	I1005 20:22:11.223764  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:11.223770  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:11.223775  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:11.223780  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:11.223788  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:11 GMT
	I1005 20:22:11.223884  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"350","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1005 20:22:11.721529  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:11.721565  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:11.721576  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:11.721582  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:11.724182  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:11.724211  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:11.724220  427001 round_trippers.go:580]     Audit-Id: cff1ac3f-af53-4e7e-98f2-3951ad3e6094
	I1005 20:22:11.724226  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:11.724232  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:11.724237  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:11.724242  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:11.724248  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:11 GMT
	I1005 20:22:11.724458  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"350","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1005 20:22:12.221937  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:12.221963  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:12.221971  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:12.221977  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:12.224680  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:12.224714  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:12.224726  427001 round_trippers.go:580]     Audit-Id: c7f84452-09e3-4788-ac73-0e01190071a2
	I1005 20:22:12.224734  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:12.224740  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:12.224745  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:12.224752  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:12.224761  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:12 GMT
	I1005 20:22:12.224894  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"350","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1005 20:22:12.225227  427001 node_ready.go:58] node "multinode-401792" has status "Ready":"False"
	I1005 20:22:12.721888  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:12.721916  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:12.721929  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:12.721935  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:12.724511  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:12.724543  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:12.724554  427001 round_trippers.go:580]     Audit-Id: 03dc5b05-5425-4bb9-8663-07d410a574cb
	I1005 20:22:12.724562  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:12.724569  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:12.724577  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:12.724585  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:12.724593  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:12 GMT
	I1005 20:22:12.724762  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"350","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1005 20:22:13.221353  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:13.221381  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:13.221390  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:13.221396  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:13.223910  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:13.223942  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:13.223955  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:13.223965  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:13.223975  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:13.223984  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:13 GMT
	I1005 20:22:13.223994  427001 round_trippers.go:580]     Audit-Id: 85008a00-e9a2-42de-9c82-984f437fa34b
	I1005 20:22:13.224008  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:13.224144  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"350","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1005 20:22:13.721789  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:13.721816  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:13.721824  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:13.721830  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:13.724490  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:13.724520  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:13.724530  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:13.724538  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:13 GMT
	I1005 20:22:13.724545  427001 round_trippers.go:580]     Audit-Id: 5929b582-f0eb-4104-a680-47ac271aae96
	I1005 20:22:13.724554  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:13.724563  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:13.724571  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:13.724732  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"350","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1005 20:22:14.221194  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:14.221220  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:14.221228  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:14.221235  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:14.223675  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:14.223708  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:14.223719  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:14.223727  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:14.223735  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:14 GMT
	I1005 20:22:14.223743  427001 round_trippers.go:580]     Audit-Id: 2c16ddf6-1f50-46b7-beac-e8160d2e5bcb
	I1005 20:22:14.223752  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:14.223761  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:14.223885  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"350","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1005 20:22:14.721552  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:14.721586  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:14.721597  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:14.721605  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:14.724260  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:14.724292  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:14.724304  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:14.724312  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:14 GMT
	I1005 20:22:14.724319  427001 round_trippers.go:580]     Audit-Id: 59d664b0-7701-4bca-ae5b-5488865992f5
	I1005 20:22:14.724329  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:14.724337  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:14.724346  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:14.724482  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"350","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1005 20:22:14.724835  427001 node_ready.go:58] node "multinode-401792" has status "Ready":"False"
	I1005 20:22:15.222103  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:15.222129  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:15.222138  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:15.222144  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:15.224530  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:15.224559  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:15.224570  427001 round_trippers.go:580]     Audit-Id: e3ec8f13-03d8-4d76-821c-fdfb486aea39
	I1005 20:22:15.224578  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:15.224585  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:15.224593  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:15.224601  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:15.224613  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:15 GMT
	I1005 20:22:15.224774  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"350","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1005 20:22:15.721399  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:15.721428  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:15.721437  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:15.721443  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:15.723802  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:15.723835  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:15.723846  427001 round_trippers.go:580]     Audit-Id: 93043169-ae3f-45fe-b810-93fb8c891005
	I1005 20:22:15.723854  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:15.723862  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:15.723871  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:15.723879  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:15.723886  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:15 GMT
	I1005 20:22:15.724010  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"350","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1005 20:22:16.221586  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:16.221612  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:16.221621  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:16.221628  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:16.224070  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:16.224097  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:16.224107  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:16.224115  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:16 GMT
	I1005 20:22:16.224123  427001 round_trippers.go:580]     Audit-Id: e92ba14a-419f-45ef-98b1-02aaa475c293
	I1005 20:22:16.224130  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:16.224139  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:16.224151  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:16.224306  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"350","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1005 20:22:16.721785  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:16.721819  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:16.721832  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:16.721842  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:16.724190  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:16.724215  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:16.724223  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:16.724230  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:16.724238  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:16.724247  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:16.724256  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:16 GMT
	I1005 20:22:16.724265  427001 round_trippers.go:580]     Audit-Id: e697ee9d-1c59-44ce-847e-0b77435e9a31
	I1005 20:22:16.724435  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"350","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1005 20:22:17.222097  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:17.222122  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:17.222131  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:17.222137  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:17.224677  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:17.224709  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:17.224721  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:17.224729  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:17.224736  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:17.224744  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:17 GMT
	I1005 20:22:17.224754  427001 round_trippers.go:580]     Audit-Id: f73ba68d-307b-4a41-a905-61f9745ba5e2
	I1005 20:22:17.224766  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:17.224901  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"350","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1005 20:22:17.225235  427001 node_ready.go:58] node "multinode-401792" has status "Ready":"False"
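	[Editor's illustration] The repeated requests in this log are minikube's node-readiness poll: node_ready.go re-fetches the Node object from the API server roughly every 500ms and checks whether its NodeReady condition has become True, logging `has status "Ready":"False"` until it does. The sketch below illustrates that polling pattern with client-go; it is a minimal sketch of the pattern visible in the log, not minikube's actual implementation, and the kubeconfig path is a placeholder.

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isNodeReady reports whether the node's NodeReady condition is True.
	func isNodeReady(node *corev1.Node) bool {
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				return cond.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Placeholder kubeconfig path; minikube derives this from its own profile.
		config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		client, err := kubernetes.NewForConfig(config)
		if err != nil {
			log.Fatal(err)
		}

		// Poll the node every 500ms, mirroring the ~500ms cadence in the log above.
		for {
			node, err := client.CoreV1().Nodes().Get(context.TODO(), "multinode-401792", metav1.GetOptions{})
			if err != nil {
				log.Fatal(err)
			}
			if isNodeReady(node) {
				fmt.Println(`node "multinode-401792" has status "Ready":"True"`)
				return
			}
			fmt.Println(`node "multinode-401792" has status "Ready":"False"`)
			time.Sleep(500 * time.Millisecond)
		}
	}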
	I1005 20:22:17.721537  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:17.721564  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:17.721573  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:17.721580  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:17.724101  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:17.724134  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:17.724145  427001 round_trippers.go:580]     Audit-Id: 03f0c7f5-d7bc-4e5d-b607-efe76f6d474d
	I1005 20:22:17.724166  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:17.724174  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:17.724183  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:17.724191  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:17.724199  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:17 GMT
	I1005 20:22:17.724359  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"350","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1005 20:22:18.221098  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:18.221124  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:18.221133  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:18.221139  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:18.223609  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:18.223637  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:18.223645  427001 round_trippers.go:580]     Audit-Id: a61c6228-98ec-4534-9da2-ec3e68cd5f74
	I1005 20:22:18.223650  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:18.223655  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:18.223660  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:18.223668  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:18.223677  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:18 GMT
	I1005 20:22:18.223838  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"350","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1005 20:22:18.721109  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:18.721137  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:18.721146  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:18.721155  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:18.723813  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:18.723840  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:18.723848  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:18.723854  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:18.723860  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:18.723865  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:18 GMT
	I1005 20:22:18.723870  427001 round_trippers.go:580]     Audit-Id: 9de7be9b-5f5b-4ed6-9b0f-08c3855f55b4
	I1005 20:22:18.723879  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:18.724051  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"350","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1005 20:22:19.221709  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:19.221737  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:19.221746  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:19.221752  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:19.224185  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:19.224208  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:19.224215  427001 round_trippers.go:580]     Audit-Id: 8d79637a-1cd2-4b3a-834d-455aaf73a861
	I1005 20:22:19.224221  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:19.224226  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:19.224231  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:19.224236  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:19.224241  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:19 GMT
	I1005 20:22:19.224354  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"350","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1005 20:22:19.722108  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:19.722136  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:19.722146  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:19.722151  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:19.724533  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:19.724560  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:19.724568  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:19.724573  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:19.724578  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:19 GMT
	I1005 20:22:19.724583  427001 round_trippers.go:580]     Audit-Id: 3e9fcbdf-8d47-4e17-b26e-b8fc32a42788
	I1005 20:22:19.724588  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:19.724593  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:19.724724  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"350","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1005 20:22:19.725058  427001 node_ready.go:58] node "multinode-401792" has status "Ready":"False"
	I1005 20:22:20.221326  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:20.221351  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:20.221360  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:20.221366  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:20.224069  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:20.224095  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:20.224103  427001 round_trippers.go:580]     Audit-Id: a0317273-ca6c-4e97-a74c-eeba542ae6e7
	I1005 20:22:20.224112  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:20.224121  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:20.224130  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:20.224139  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:20.224148  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:20 GMT
	I1005 20:22:20.224272  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"350","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1005 20:22:20.721986  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:20.722015  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:20.722024  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:20.722031  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:20.724768  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:20.724803  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:20.724814  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:20.724826  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:20.724834  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:20.724842  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:20 GMT
	I1005 20:22:20.724850  427001 round_trippers.go:580]     Audit-Id: 4f0e6f7e-efcd-45fd-9d2c-43b57c976ce7
	I1005 20:22:20.724857  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:20.724967  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"350","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1005 20:22:21.221577  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:21.221605  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:21.221614  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:21.221620  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:21.224304  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:21.224337  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:21.224349  427001 round_trippers.go:580]     Audit-Id: 47ff8d22-b2dc-41ec-b394-e3b59e16c844
	I1005 20:22:21.224358  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:21.224364  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:21.224369  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:21.224375  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:21.224380  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:21 GMT
	I1005 20:22:21.224492  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"350","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1005 20:22:21.721111  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:21.721146  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:21.721155  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:21.721161  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:21.723961  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:21.723990  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:21.724002  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:21.724012  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:21.724020  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:21.724029  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:21.724038  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:21 GMT
	I1005 20:22:21.724047  427001 round_trippers.go:580]     Audit-Id: b361e146-186c-481e-acf3-452219fa2c10
	I1005 20:22:21.724206  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"350","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1005 20:22:22.221868  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:22.221900  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:22.221914  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:22.221924  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:22.224360  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:22.224391  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:22.224402  427001 round_trippers.go:580]     Audit-Id: 5950f40e-a946-4bab-bc52-b6ca42ed33bc
	I1005 20:22:22.224410  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:22.224419  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:22.224427  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:22.224436  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:22.224448  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:22 GMT
	I1005 20:22:22.224627  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"350","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1005 20:22:22.224957  427001 node_ready.go:58] node "multinode-401792" has status "Ready":"False"
	I1005 20:22:22.721747  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:22.721792  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:22.721801  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:22.721828  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:22.724480  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:22.724504  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:22.724511  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:22.724517  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:22.724522  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:22 GMT
	I1005 20:22:22.724527  427001 round_trippers.go:580]     Audit-Id: 3cfce12a-165f-4a08-b012-d782e6001974
	I1005 20:22:22.724532  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:22.724536  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:22.724686  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"350","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1005 20:22:23.221199  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:23.221229  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:23.221238  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:23.221244  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:23.223994  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:23.224022  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:23.224032  427001 round_trippers.go:580]     Audit-Id: 3553547b-0586-4da8-b737-824a2cba7126
	I1005 20:22:23.224040  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:23.224048  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:23.224054  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:23.224062  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:23.224069  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:23 GMT
	I1005 20:22:23.224200  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"350","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1005 20:22:23.721902  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:23.721926  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:23.721934  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:23.721940  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:23.724521  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:23.724546  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:23.724554  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:23.724559  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:23 GMT
	I1005 20:22:23.724564  427001 round_trippers.go:580]     Audit-Id: 06496cf0-e02c-4ac3-9b8f-97532c74be2f
	I1005 20:22:23.724569  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:23.724574  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:23.724581  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:23.724713  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"350","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1005 20:22:24.221294  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:24.221319  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:24.221328  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:24.221334  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:24.223871  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:24.223896  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:24.223903  427001 round_trippers.go:580]     Audit-Id: dd8e0b7f-9463-4b55-a592-3029e769f193
	I1005 20:22:24.223909  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:24.223914  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:24.223919  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:24.223924  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:24.223929  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:24 GMT
	I1005 20:22:24.224040  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"350","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1005 20:22:24.721825  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:24.721859  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:24.721870  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:24.721885  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:24.724396  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:24.724426  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:24.724436  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:24 GMT
	I1005 20:22:24.724445  427001 round_trippers.go:580]     Audit-Id: 3edc818f-0d13-4ba5-b8d5-8c6a83bba33c
	I1005 20:22:24.724452  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:24.724460  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:24.724468  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:24.724479  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:24.724622  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"350","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1005 20:22:24.725002  427001 node_ready.go:58] node "multinode-401792" has status "Ready":"False"
	I1005 20:22:25.221359  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:25.221387  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:25.221396  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:25.221402  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:25.223924  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:25.223956  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:25.223967  427001 round_trippers.go:580]     Audit-Id: db01cc37-da9d-408f-bfb1-5801a3e8c01c
	I1005 20:22:25.223975  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:25.223988  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:25.223996  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:25.224004  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:25.224011  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:25 GMT
	I1005 20:22:25.224133  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"350","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1005 20:22:25.721842  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:25.721869  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:25.721878  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:25.721901  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:25.724309  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:25.724338  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:25.724349  427001 round_trippers.go:580]     Audit-Id: c05b9a18-adf5-41d3-88b5-e4837e81a93d
	I1005 20:22:25.724357  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:25.724365  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:25.724372  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:25.724380  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:25.724388  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:25 GMT
	I1005 20:22:25.724568  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"350","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1005 20:22:26.222116  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:26.222152  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:26.222165  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:26.222175  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:26.224865  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:26.224898  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:26.224918  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:26.224924  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:26.224929  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:26.224935  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:26 GMT
	I1005 20:22:26.224942  427001 round_trippers.go:580]     Audit-Id: aefa937c-310f-4def-8a99-b882c582053c
	I1005 20:22:26.224947  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:26.225047  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"350","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1005 20:22:26.721756  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:26.721782  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:26.721791  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:26.721798  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:26.724355  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:26.724391  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:26.724403  427001 round_trippers.go:580]     Audit-Id: de1ae109-3d25-4dbd-893c-0350e0493db3
	I1005 20:22:26.724410  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:26.724416  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:26.724421  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:26.724426  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:26.724431  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:26 GMT
	I1005 20:22:26.724547  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"350","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1005 20:22:27.221093  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:27.221126  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:27.221136  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:27.221142  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:27.223774  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:27.223797  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:27.223805  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:27.223810  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:27.223815  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:27.223820  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:27.223825  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:27 GMT
	I1005 20:22:27.223830  427001 round_trippers.go:580]     Audit-Id: 39487315-8ffa-4a84-9ff1-847dc66adae5
	I1005 20:22:27.223946  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"350","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1005 20:22:27.224346  427001 node_ready.go:58] node "multinode-401792" has status "Ready":"False"
	I1005 20:22:27.721869  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:27.721895  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:27.721904  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:27.721910  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:27.724579  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:27.724610  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:27.724620  427001 round_trippers.go:580]     Audit-Id: 0b590845-b42a-4c5b-97c8-e7ddc9b7104c
	I1005 20:22:27.724634  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:27.724641  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:27.724649  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:27.724657  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:27.724665  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:27 GMT
	I1005 20:22:27.724823  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"350","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1005 20:22:28.221791  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:28.221822  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:28.221831  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:28.221837  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:28.224354  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:28.224382  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:28.224396  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:28.224411  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:28.224421  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:28.224428  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:28 GMT
	I1005 20:22:28.224439  427001 round_trippers.go:580]     Audit-Id: f924c2e7-3693-454e-8dbb-6099a20542b3
	I1005 20:22:28.224450  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:28.224653  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"350","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1005 20:22:28.721199  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:28.721227  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:28.721235  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:28.721241  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:28.723747  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:28.723775  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:28.723783  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:28.723788  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:28.723794  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:28.723799  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:28 GMT
	I1005 20:22:28.723805  427001 round_trippers.go:580]     Audit-Id: 8b478fad-2d8e-4faa-8b6c-73c92f12d3d3
	I1005 20:22:28.723810  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:28.723949  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"350","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1005 20:22:29.221544  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:29.221574  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:29.221583  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:29.221589  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:29.224145  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:29.224174  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:29.224184  427001 round_trippers.go:580]     Audit-Id: 099e2c3b-561b-4958-9b86-a8d9d3090b93
	I1005 20:22:29.224193  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:29.224201  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:29.224209  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:29.224216  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:29.224224  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:29 GMT
	I1005 20:22:29.224342  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"350","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1005 20:22:29.224658  427001 node_ready.go:58] node "multinode-401792" has status "Ready":"False"
	I1005 20:22:29.722025  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:29.722050  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:29.722058  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:29.722065  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:29.724455  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:29.724478  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:29.724485  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:29 GMT
	I1005 20:22:29.724491  427001 round_trippers.go:580]     Audit-Id: 8bdf5987-3e98-412f-b937-498c697620b0
	I1005 20:22:29.724496  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:29.724501  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:29.724506  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:29.724514  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:29.724733  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"350","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1005 20:22:30.221365  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:30.221394  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:30.221403  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:30.221409  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:30.224056  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:30.224079  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:30.224086  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:30.224095  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:30 GMT
	I1005 20:22:30.224107  427001 round_trippers.go:580]     Audit-Id: cf00ad6b-5f59-4c8c-b89e-999fa99ee5a2
	I1005 20:22:30.224114  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:30.224121  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:30.224129  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:30.224253  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"350","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1005 20:22:30.721949  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:30.721977  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:30.721985  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:30.721991  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:30.724469  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:30.724495  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:30.724505  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:30 GMT
	I1005 20:22:30.724512  427001 round_trippers.go:580]     Audit-Id: 847634ce-da91-44e0-83ff-8d7bcff0dd13
	I1005 20:22:30.724521  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:30.724529  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:30.724536  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:30.724543  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:30.724672  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"350","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1005 20:22:31.221249  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:31.221278  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:31.221286  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:31.221292  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:31.223851  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:31.223877  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:31.223887  427001 round_trippers.go:580]     Audit-Id: 1eeba306-f70f-434b-a7c1-b5f9c15bb2e3
	I1005 20:22:31.223895  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:31.223904  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:31.223913  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:31.223922  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:31.223933  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:31 GMT
	I1005 20:22:31.224036  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"350","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1005 20:22:31.721686  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:31.721713  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:31.721721  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:31.721727  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:31.724231  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:31.724255  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:31.724262  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:31.724268  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:31.724273  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:31 GMT
	I1005 20:22:31.724278  427001 round_trippers.go:580]     Audit-Id: 42353956-41c8-4d74-bd22-dfe6314f53b8
	I1005 20:22:31.724283  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:31.724288  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:31.724478  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"350","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1005 20:22:31.724819  427001 node_ready.go:58] node "multinode-401792" has status "Ready":"False"
	I1005 20:22:32.221816  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:32.221841  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:32.221850  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:32.221856  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:32.224287  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:32.224309  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:32.224316  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:32.224322  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:32.224327  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:32.224332  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:32.224337  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:32 GMT
	I1005 20:22:32.224342  427001 round_trippers.go:580]     Audit-Id: bc547b1b-46c1-47e9-91b8-cff283528897
	I1005 20:22:32.224491  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"350","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1005 20:22:32.721721  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:32.721746  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:32.721755  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:32.721761  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:32.724358  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:32.724381  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:32.724388  427001 round_trippers.go:580]     Audit-Id: e6b06293-3841-414d-97eb-284e49dfb085
	I1005 20:22:32.724394  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:32.724399  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:32.724404  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:32.724422  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:32.724427  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:32 GMT
	I1005 20:22:32.724642  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"350","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1005 20:22:33.221261  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:33.221285  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:33.221304  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:33.221310  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:33.223731  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:33.223757  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:33.223768  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:33.223777  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:33.223785  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:33.223794  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:33.223802  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:33 GMT
	I1005 20:22:33.223808  427001 round_trippers.go:580]     Audit-Id: 6a5e1816-182c-4603-afef-173ce5633043
	I1005 20:22:33.223937  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"350","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1005 20:22:33.721183  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:33.721215  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:33.721229  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:33.721238  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:33.723782  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:33.723807  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:33.723816  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:33.723822  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:33 GMT
	I1005 20:22:33.723830  427001 round_trippers.go:580]     Audit-Id: ba80af46-97a9-4cc0-840e-79d446bf993a
	I1005 20:22:33.723838  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:33.723846  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:33.723853  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:33.723998  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"350","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1005 20:22:34.221553  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:34.221584  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:34.221599  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:34.221607  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:34.224127  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:34.224151  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:34.224158  427001 round_trippers.go:580]     Audit-Id: 91e4f204-1021-4360-917d-e496d2d55996
	I1005 20:22:34.224163  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:34.224171  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:34.224179  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:34.224189  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:34.224197  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:34 GMT
	I1005 20:22:34.224297  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"350","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1005 20:22:34.224626  427001 node_ready.go:58] node "multinode-401792" has status "Ready":"False"
	I1005 20:22:34.722004  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:34.722030  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:34.722039  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:34.722045  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:34.724495  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:34.724517  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:34.724525  427001 round_trippers.go:580]     Audit-Id: d8b9f0a1-d2f3-40e4-a443-828bc62b1f30
	I1005 20:22:34.724531  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:34.724536  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:34.724541  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:34.724546  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:34.724551  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:34 GMT
	I1005 20:22:34.724720  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"350","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1005 20:22:35.221253  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:35.221281  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:35.221289  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:35.221295  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:35.223664  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:35.223687  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:35.223694  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:35 GMT
	I1005 20:22:35.223707  427001 round_trippers.go:580]     Audit-Id: 2b141278-ddbe-4db2-8c71-8da3e0ce9283
	I1005 20:22:35.223715  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:35.223733  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:35.223741  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:35.223752  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:35.223885  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"350","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1005 20:22:35.721489  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:35.721518  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:35.721526  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:35.721534  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:35.723877  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:35.723908  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:35.723916  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:35.723923  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:35.723929  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:35 GMT
	I1005 20:22:35.723937  427001 round_trippers.go:580]     Audit-Id: 4157a5c9-37ed-4530-9690-4ea60d921061
	I1005 20:22:35.723946  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:35.723955  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:35.724115  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"350","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1005 20:22:36.221813  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:36.221846  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:36.221858  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:36.221876  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:36.224381  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:36.224403  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:36.224411  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:36.224417  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:36 GMT
	I1005 20:22:36.224422  427001 round_trippers.go:580]     Audit-Id: c2ecd75e-4507-4314-89e0-d22a5b29dc85
	I1005 20:22:36.224427  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:36.224431  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:36.224437  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:36.224555  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"350","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1005 20:22:36.224889  427001 node_ready.go:58] node "multinode-401792" has status "Ready":"False"
	I1005 20:22:36.721157  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:36.721183  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:36.721192  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:36.721198  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:36.723685  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:36.723713  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:36.723723  427001 round_trippers.go:580]     Audit-Id: b4321e10-1ebf-4138-864b-e0dd9d81be3e
	I1005 20:22:36.723732  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:36.723740  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:36.723747  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:36.723758  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:36.723767  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:36 GMT
	I1005 20:22:36.723906  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"350","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1005 20:22:37.221516  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:37.221550  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:37.221563  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:37.221573  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:37.224053  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:37.224075  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:37.224083  427001 round_trippers.go:580]     Audit-Id: 979677f8-4775-4e21-8154-8eae0b9e1620
	I1005 20:22:37.224089  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:37.224097  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:37.224105  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:37.224113  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:37.224120  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:37 GMT
	I1005 20:22:37.224229  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"350","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1005 20:22:37.722148  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:37.722177  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:37.722188  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:37.722196  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:37.724634  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:37.724656  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:37.724664  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:37.724670  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:37.724678  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:37.724685  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:37 GMT
	I1005 20:22:37.724694  427001 round_trippers.go:580]     Audit-Id: b636b73e-41c4-4f51-8ef8-b7f2b00149ae
	I1005 20:22:37.724701  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:37.724889  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"350","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1005 20:22:38.221770  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:38.221800  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:38.221815  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:38.221823  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:38.224354  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:38.224376  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:38.224384  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:38 GMT
	I1005 20:22:38.224390  427001 round_trippers.go:580]     Audit-Id: 69dbd6ee-8602-4b6e-91b1-f8a368f9e580
	I1005 20:22:38.224404  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:38.224412  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:38.224420  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:38.224427  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:38.224551  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"350","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1005 20:22:38.721151  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:38.721178  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:38.721209  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:38.721216  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:38.723761  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:38.723792  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:38.723803  427001 round_trippers.go:580]     Audit-Id: 69e687b4-c1b0-405e-aae9-37f667b9f618
	I1005 20:22:38.723811  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:38.723819  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:38.723828  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:38.723837  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:38.723848  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:38 GMT
	I1005 20:22:38.723983  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"350","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1005 20:22:38.724365  427001 node_ready.go:58] node "multinode-401792" has status "Ready":"False"
	I1005 20:22:39.221552  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:39.221577  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:39.221585  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:39.221592  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:39.224136  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:39.224519  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:39.224546  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:39.224557  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:39 GMT
	I1005 20:22:39.224566  427001 round_trippers.go:580]     Audit-Id: ca556a05-4194-4dd7-9208-038a15ebb0e1
	I1005 20:22:39.224576  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:39.224586  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:39.224596  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:39.224745  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"350","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1005 20:22:39.721140  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:39.721167  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:39.721176  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:39.721184  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:39.723615  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:39.723638  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:39.723646  427001 round_trippers.go:580]     Audit-Id: 3392f3fe-23b3-4797-a8eb-1d72b8bba953
	I1005 20:22:39.723652  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:39.723657  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:39.723662  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:39.723669  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:39.723677  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:39 GMT
	I1005 20:22:39.723828  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"350","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1005 20:22:40.221416  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:40.221448  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:40.221461  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:40.221472  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:40.224706  427001 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1005 20:22:40.224738  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:40.224749  427001 round_trippers.go:580]     Audit-Id: c77c0508-dbf2-471b-a012-28a7d9e33fc0
	I1005 20:22:40.224758  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:40.224766  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:40.224775  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:40.224783  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:40.224790  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:40 GMT
	I1005 20:22:40.225025  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"424","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1005 20:22:40.225523  427001 node_ready.go:49] node "multinode-401792" has status "Ready":"True"
	I1005 20:22:40.225552  427001 node_ready.go:38] duration metric: took 32.022365185s waiting for node "multinode-401792" to be "Ready" ...
	I1005 20:22:40.225570  427001 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
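
The ~500ms GET cadence above is minikube's node-readiness poll (node_ready.go): it re-fetches the Node object until its Ready condition flips to True, which happens here at 20:22:40 after 32s. As a minimal illustrative sketch only, assuming client-go, a placeholder kubeconfig path, and a placeholder timeout (this is not minikube's actual helper), the same wait could be written as:

	// Illustrative sketch: poll a node's Ready condition roughly every
	// 500ms, matching the GET cadence visible in the log above.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil // node reported Ready, as at 20:22:40 in the log
					}
				}
			}
			time.Sleep(500 * time.Millisecond) // ~500ms between GETs, as in the log
		}
		return fmt.Errorf("node %q not Ready within %s", name, timeout)
	}

	func main() {
		// kubeconfig path and timeout are placeholders for this sketch
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitNodeReady(cs, "multinode-401792", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("node is Ready")
	}

Functionally this is what kubectl wait --for=condition=Ready node/multinode-401792 --timeout=6m does in a single command.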
	I1005 20:22:40.225679  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1005 20:22:40.225694  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:40.225705  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:40.225713  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:40.230395  427001 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1005 20:22:40.230437  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:40.230448  427001 round_trippers.go:580]     Audit-Id: afe1ad64-b249-40a2-9842-3cfb40352f0b
	I1005 20:22:40.230456  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:40.230464  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:40.230471  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:40.230479  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:40.230487  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:40 GMT
	I1005 20:22:40.231611  427001 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"434"},"items":[{"metadata":{"name":"coredns-5dd5756b68-nctb6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6db951fc-21de-44c9-9e94-cfe1ab7ac040","resourceVersion":"429","creationTimestamp":"2023-10-05T20:22:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"63fb560e-8695-4b28-89cf-d0b3759b9e96","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"63fb560e-8695-4b28-89cf-d0b3759b9e96\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55534 chars]
	I1005 20:22:40.235541  427001 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-nctb6" in "kube-system" namespace to be "Ready" ...
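
From here the log switches to pod_ready.go, which applies the same Ready-condition check to each system-critical pod, starting with coredns-5dd5756b68-nctb6. A minimal sketch of that per-pod check, again assuming client-go and a placeholder kubeconfig path (illustrative only, not minikube's code):

	// Illustrative sketch: report whether a pod's Ready condition is True,
	// the check the log performs once per poll for each system pod.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5dd5756b68-nctb6", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		ready := false
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true // pod passes the same Ready gate the log waits on
			}
		}
		fmt.Println("pod Ready:", ready)
	}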
	I1005 20:22:40.235666  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nctb6
	I1005 20:22:40.235680  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:40.235692  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:40.235703  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:40.238128  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:40.238157  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:40.238168  427001 round_trippers.go:580]     Audit-Id: d057194d-642f-4afa-a3d2-bbb8628a911d
	I1005 20:22:40.238175  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:40.238183  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:40.238190  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:40.238198  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:40.238205  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:40 GMT
	I1005 20:22:40.238356  427001 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nctb6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6db951fc-21de-44c9-9e94-cfe1ab7ac040","resourceVersion":"429","creationTimestamp":"2023-10-05T20:22:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"63fb560e-8695-4b28-89cf-d0b3759b9e96","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"63fb560e-8695-4b28-89cf-d0b3759b9e96\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1005 20:22:40.238939  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:40.238970  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:40.238981  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:40.238993  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:40.241661  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:40.241695  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:40.241704  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:40.241712  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:40 GMT
	I1005 20:22:40.241719  427001 round_trippers.go:580]     Audit-Id: ae582028-83af-4daa-ac2f-a1852a1c8adf
	I1005 20:22:40.241728  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:40.241736  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:40.241752  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:40.241907  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"424","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1005 20:22:40.242412  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nctb6
	I1005 20:22:40.242434  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:40.242445  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:40.242461  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:40.244656  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:40.244676  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:40.244683  427001 round_trippers.go:580]     Audit-Id: 57311c94-bfba-4484-948a-8fcf77502061
	I1005 20:22:40.244688  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:40.244700  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:40.244710  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:40.244726  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:40.244735  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:40 GMT
	I1005 20:22:40.244931  427001 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nctb6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6db951fc-21de-44c9-9e94-cfe1ab7ac040","resourceVersion":"429","creationTimestamp":"2023-10-05T20:22:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"63fb560e-8695-4b28-89cf-d0b3759b9e96","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"63fb560e-8695-4b28-89cf-d0b3759b9e96\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1005 20:22:40.245531  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:40.245555  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:40.245568  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:40.245581  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:40.247495  427001 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1005 20:22:40.247512  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:40.247519  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:40.247525  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:40.247531  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:40.247536  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:40 GMT
	I1005 20:22:40.247541  427001 round_trippers.go:580]     Audit-Id: 10edef9a-1ef7-4028-abdd-dfa109f90694
	I1005 20:22:40.247550  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:40.247964  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"424","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1005 20:22:40.748868  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nctb6
	I1005 20:22:40.748896  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:40.748904  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:40.748910  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:40.751365  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:40.751394  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:40.751405  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:40.751413  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:40.751420  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:40.751429  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:40 GMT
	I1005 20:22:40.751438  427001 round_trippers.go:580]     Audit-Id: c1e3d3d1-ff05-4f7b-8b3e-7357d74b4642
	I1005 20:22:40.751452  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:40.751658  427001 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nctb6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6db951fc-21de-44c9-9e94-cfe1ab7ac040","resourceVersion":"429","creationTimestamp":"2023-10-05T20:22:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"63fb560e-8695-4b28-89cf-d0b3759b9e96","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"63fb560e-8695-4b28-89cf-d0b3759b9e96\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1005 20:22:40.752243  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:40.752260  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:40.752276  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:40.752287  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:40.754725  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:40.754745  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:40.754753  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:40.754759  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:40.754764  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:40.754769  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:40 GMT
	I1005 20:22:40.754774  427001 round_trippers.go:580]     Audit-Id: 09b11a36-80b4-43fa-89b2-2ef510824a61
	I1005 20:22:40.754779  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:40.754939  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"424","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1005 20:22:41.248590  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nctb6
	I1005 20:22:41.248617  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:41.248625  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:41.248631  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:41.251236  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:41.251264  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:41.251273  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:41.251281  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:41 GMT
	I1005 20:22:41.251289  427001 round_trippers.go:580]     Audit-Id: e84668eb-2287-4e92-90d4-8c51013c52b8
	I1005 20:22:41.251296  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:41.251304  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:41.251316  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:41.251446  427001 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nctb6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6db951fc-21de-44c9-9e94-cfe1ab7ac040","resourceVersion":"442","creationTimestamp":"2023-10-05T20:22:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"63fb560e-8695-4b28-89cf-d0b3759b9e96","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"63fb560e-8695-4b28-89cf-d0b3759b9e96\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1005 20:22:41.251925  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:41.251939  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:41.251947  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:41.251953  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:41.254205  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:41.254225  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:41.254232  427001 round_trippers.go:580]     Audit-Id: 948f4a0e-5951-443d-928d-eeae7ab2c3a7
	I1005 20:22:41.254238  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:41.254243  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:41.254248  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:41.254253  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:41.254258  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:41 GMT
	I1005 20:22:41.254405  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"424","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1005 20:22:41.254868  427001 pod_ready.go:92] pod "coredns-5dd5756b68-nctb6" in "kube-system" namespace has status "Ready":"True"
	I1005 20:22:41.254896  427001 pod_ready.go:81] duration metric: took 1.019314423s waiting for pod "coredns-5dd5756b68-nctb6" in "kube-system" namespace to be "Ready" ...
	I1005 20:22:41.254915  427001 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-401792" in "kube-system" namespace to be "Ready" ...
	I1005 20:22:41.254991  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-401792
	I1005 20:22:41.255003  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:41.255014  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:41.255027  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:41.257371  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:41.257397  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:41.257406  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:41 GMT
	I1005 20:22:41.257414  427001 round_trippers.go:580]     Audit-Id: 713ea0ac-c0f3-44ad-bf52-8e82953e8225
	I1005 20:22:41.257422  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:41.257431  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:41.257440  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:41.257453  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:41.257557  427001 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-401792","namespace":"kube-system","uid":"44ad3fe4-b132-45ea-93d3-35a3740a12ea","resourceVersion":"317","creationTimestamp":"2023-10-05T20:21:56Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"d22baac935db4c8bd25db8b27c0e22ad","kubernetes.io/config.mirror":"d22baac935db4c8bd25db8b27c0e22ad","kubernetes.io/config.seen":"2023-10-05T20:21:55.839321424Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:21:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1005 20:22:41.257975  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:41.257991  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:41.258002  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:41.258010  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:41.260352  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:41.260372  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:41.260379  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:41.260384  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:41 GMT
	I1005 20:22:41.260390  427001 round_trippers.go:580]     Audit-Id: c0b3ca28-1f6e-4daa-a40b-3e3fa07f888a
	I1005 20:22:41.260395  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:41.260400  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:41.260406  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:41.260566  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"424","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1005 20:22:41.260945  427001 pod_ready.go:92] pod "etcd-multinode-401792" in "kube-system" namespace has status "Ready":"True"
	I1005 20:22:41.260963  427001 pod_ready.go:81] duration metric: took 6.039456ms waiting for pod "etcd-multinode-401792" in "kube-system" namespace to be "Ready" ...
	I1005 20:22:41.260980  427001 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-401792" in "kube-system" namespace to be "Ready" ...
	I1005 20:22:41.261047  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-401792
	I1005 20:22:41.261057  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:41.261068  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:41.261078  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:41.263312  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:41.263335  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:41.263345  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:41.263353  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:41 GMT
	I1005 20:22:41.263361  427001 round_trippers.go:580]     Audit-Id: c09a8f9c-69b4-4e0b-a037-f3a78845feb0
	I1005 20:22:41.263370  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:41.263383  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:41.263398  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:41.263518  427001 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-401792","namespace":"kube-system","uid":"8b0de222-02fa-4bd6-b82e-e4b5e09908ec","resourceVersion":"408","creationTimestamp":"2023-10-05T20:21:56Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"9d50d9a0bb9882a5f98fd50755a0d758","kubernetes.io/config.mirror":"9d50d9a0bb9882a5f98fd50755a0d758","kubernetes.io/config.seen":"2023-10-05T20:21:55.839327911Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:21:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I1005 20:22:41.263961  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:41.263976  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:41.263987  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:41.263996  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:41.265991  427001 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1005 20:22:41.266017  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:41.266027  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:41 GMT
	I1005 20:22:41.266036  427001 round_trippers.go:580]     Audit-Id: e9d48eda-cd7b-4b24-b96d-7ece2f1db68e
	I1005 20:22:41.266045  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:41.266061  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:41.266077  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:41.266085  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:41.266191  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"424","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1005 20:22:41.266548  427001 pod_ready.go:92] pod "kube-apiserver-multinode-401792" in "kube-system" namespace has status "Ready":"True"
	I1005 20:22:41.266565  427001 pod_ready.go:81] duration metric: took 5.577269ms waiting for pod "kube-apiserver-multinode-401792" in "kube-system" namespace to be "Ready" ...
	I1005 20:22:41.266576  427001 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-401792" in "kube-system" namespace to be "Ready" ...
	I1005 20:22:41.266647  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-401792
	I1005 20:22:41.266658  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:41.266664  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:41.266673  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:41.268874  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:41.268895  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:41.268904  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:41 GMT
	I1005 20:22:41.268912  427001 round_trippers.go:580]     Audit-Id: feb2cac6-b0f9-4470-90b7-a058af093c75
	I1005 20:22:41.268920  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:41.268928  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:41.268936  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:41.268953  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:41.269141  427001 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-401792","namespace":"kube-system","uid":"99f9133a-d7c6-4415-9d5d-d215ed75bc7b","resourceVersion":"311","creationTimestamp":"2023-10-05T20:21:55Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4f3ca30035e4a8deac072828f388eccd","kubernetes.io/config.mirror":"4f3ca30035e4a8deac072828f388eccd","kubernetes.io/config.seen":"2023-10-05T20:21:49.928194447Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:21:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I1005 20:22:41.421917  427001 request.go:629] Waited for 152.314986ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:41.421986  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:41.421991  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:41.422000  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:41.422010  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:41.424374  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:41.424399  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:41.424409  427001 round_trippers.go:580]     Audit-Id: 311c5af4-5f4a-4a7b-affe-3dbed38f4675
	I1005 20:22:41.424416  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:41.424423  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:41.424431  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:41.424438  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:41.424448  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:41 GMT
	I1005 20:22:41.424613  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"424","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1005 20:22:41.424941  427001 pod_ready.go:92] pod "kube-controller-manager-multinode-401792" in "kube-system" namespace has status "Ready":"True"
	I1005 20:22:41.424959  427001 pod_ready.go:81] duration metric: took 158.36903ms waiting for pod "kube-controller-manager-multinode-401792" in "kube-system" namespace to be "Ready" ...
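The "Waited for ... due to client-side throttling" messages that start appearing above come from client-go's client-side token-bucket rate limiter, not from the server's priority-and-fairness machinery (the message itself says so). A minimal sketch of how a client raises those limits, assuming client-go and a hypothetical kubeconfig path; the QPS/Burst values are illustrative, not what minikube uses:

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Hypothetical kubeconfig path; adjust for your environment.
    	config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	// client-go throttles requests with a client-side token bucket.
    	// Raising QPS/Burst shortens the "Waited for ... due to
    	// client-side throttling" delays recorded in this log.
    	config.QPS = 50
    	config.Burst = 100

    	clientset, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("client ready: %T\n", clientset)
    }
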
	I1005 20:22:41.424984  427001 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-l9dpz" in "kube-system" namespace to be "Ready" ...
	I1005 20:22:41.622442  427001 request.go:629] Waited for 197.378124ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-l9dpz
	I1005 20:22:41.622505  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-l9dpz
	I1005 20:22:41.622509  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:41.622517  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:41.622523  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:41.624973  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:41.624995  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:41.625002  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:41.625008  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:41.625016  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:41.625023  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:41 GMT
	I1005 20:22:41.625030  427001 round_trippers.go:580]     Audit-Id: 66d8698b-f476-498a-85fd-53277584ec34
	I1005 20:22:41.625037  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:41.625217  427001 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-l9dpz","generateName":"kube-proxy-","namespace":"kube-system","uid":"386ee581-e207-45ad-a08c-86a0804a2233","resourceVersion":"409","creationTimestamp":"2023-10-05T20:22:08Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b6c80ab9-89a2-4cdd-af70-bbfa2d07f2c8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b6c80ab9-89a2-4cdd-af70-bbfa2d07f2c8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I1005 20:22:41.822058  427001 request.go:629] Waited for 196.374868ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:41.822137  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:41.822142  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:41.822149  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:41.822155  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:41.824493  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:41.824521  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:41.824533  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:41.824542  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:41.824551  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:41 GMT
	I1005 20:22:41.824560  427001 round_trippers.go:580]     Audit-Id: e211692a-b9de-469a-9d8c-b0a456a35a8e
	I1005 20:22:41.824572  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:41.824585  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:41.824709  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"424","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1005 20:22:41.825067  427001 pod_ready.go:92] pod "kube-proxy-l9dpz" in "kube-system" namespace has status "Ready":"True"
	I1005 20:22:41.825084  427001 pod_ready.go:81] duration metric: took 400.090031ms waiting for pod "kube-proxy-l9dpz" in "kube-system" namespace to be "Ready" ...
	I1005 20:22:41.825094  427001 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-401792" in "kube-system" namespace to be "Ready" ...
	I1005 20:22:42.021478  427001 request.go:629] Waited for 196.284537ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-401792
	I1005 20:22:42.021547  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-401792
	I1005 20:22:42.021552  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:42.021561  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:42.021567  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:42.024034  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:42.024063  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:42.024074  427001 round_trippers.go:580]     Audit-Id: 6a9ad1c4-73ba-4f3c-a750-9fd8aa5206d0
	I1005 20:22:42.024083  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:42.024092  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:42.024099  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:42.024104  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:42.024109  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:42 GMT
	I1005 20:22:42.024262  427001 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-401792","namespace":"kube-system","uid":"33043544-61f1-4457-b66d-11bfdac4a024","resourceVersion":"314","creationTimestamp":"2023-10-05T20:21:56Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1f345bfdbc551b09bcec71a1eb70094b","kubernetes.io/config.mirror":"1f345bfdbc551b09bcec71a1eb70094b","kubernetes.io/config.seen":"2023-10-05T20:21:55.839330891Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:21:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1005 20:22:42.222102  427001 request.go:629] Waited for 197.412684ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:42.222174  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:22:42.222180  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:42.222195  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:42.222207  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:42.224743  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:42.224764  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:42.224774  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:42.224779  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:42.224794  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:42 GMT
	I1005 20:22:42.224801  427001 round_trippers.go:580]     Audit-Id: 3e7017e2-8f8f-4363-b6b9-f29e6076b4d0
	I1005 20:22:42.224811  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:42.224819  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:42.224927  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"424","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1005 20:22:42.225243  427001 pod_ready.go:92] pod "kube-scheduler-multinode-401792" in "kube-system" namespace has status "Ready":"True"
	I1005 20:22:42.225257  427001 pod_ready.go:81] duration metric: took 400.156047ms waiting for pod "kube-scheduler-multinode-401792" in "kube-system" namespace to be "Ready" ...
	I1005 20:22:42.225268  427001 pod_ready.go:38] duration metric: took 1.999677728s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
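The pod_ready.go loop recorded above repeatedly GETs each system pod and returns once its Ready condition turns True. A minimal client-go sketch of that polling pattern, not minikube's actual implementation; the kubeconfig path is a placeholder and the pod name is taken from this log:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True,
    // which is the signal the pod_ready.go lines above wait on.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	// Hypothetical kubeconfig path; adjust for your environment.
    	config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	clientset, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	// Poll every 500ms for up to 6 minutes, mirroring the 6m0s budget in the log.
    	err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
    		pod, err := clientset.CoreV1().Pods("kube-system").Get(
    			context.TODO(), "coredns-5dd5756b68-nctb6", metav1.GetOptions{})
    		if err != nil {
    			return false, nil // treat errors as transient and keep polling
    		}
    		return isPodReady(pod), nil
    	})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("pod is Ready")
    }
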
	I1005 20:22:42.225285  427001 api_server.go:52] waiting for apiserver process to appear ...
	I1005 20:22:42.225331  427001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1005 20:22:42.235759  427001 command_runner.go:130] > 1437
	I1005 20:22:42.236572  427001 api_server.go:72] duration metric: took 34.098308387s to wait for apiserver process to appear ...
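The process check above shells out to pgrep: -f matches against the full command line, -x requires the pattern to match that whole line, and -n keeps only the newest match, so the command prints a single PID (1437 here) or exits non-zero when nothing matches. A local stand-in for the same invocation (minikube actually runs it with sudo over SSH inside the node):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Prints the PID of the newest process whose full command line
    	// matches the pattern; a non-zero exit means no match.
    	out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    	if err != nil {
    		fmt.Println("no apiserver process found:", err)
    		return
    	}
    	fmt.Println("apiserver pid:", strings.TrimSpace(string(out)))
    }
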
	I1005 20:22:42.236600  427001 api_server.go:88] waiting for apiserver healthz status ...
	I1005 20:22:42.236621  427001 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1005 20:22:42.241812  427001 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
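A sketch of the healthz probe recorded above: a healthy apiserver answers 200 with the literal body "ok". Certificate verification is skipped here for brevity because the endpoint serves the cluster's self-signed certificate; a real client would trust the cluster CA and present client credentials if anonymous access is disabled:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	// Skip cert verification only for this sketch; the apiserver's
    	// certificate is signed by the cluster's own CA.
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get("https://192.168.58.2:8443/healthz")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	// A healthy apiserver returns 200 with the body "ok".
    	fmt.Printf("healthz: %d %s\n", resp.StatusCode, string(body))
    }
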
	I1005 20:22:42.241895  427001 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I1005 20:22:42.241904  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:42.241912  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:42.241919  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:42.242914  427001 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1005 20:22:42.242933  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:42.242940  427001 round_trippers.go:580]     Content-Length: 263
	I1005 20:22:42.242945  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:42 GMT
	I1005 20:22:42.242951  427001 round_trippers.go:580]     Audit-Id: 2b13efa0-766f-40a6-8e17-4d1cab60dd87
	I1005 20:22:42.242958  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:42.242967  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:42.242972  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:42.242987  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:42.243006  427001 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.2",
	  "gitCommit": "89a4ea3e1e4ddd7f7572286090359983e0387b2f",
	  "gitTreeState": "clean",
	  "buildDate": "2023-09-13T09:29:07Z",
	  "goVersion": "go1.20.8",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1005 20:22:42.243124  427001 api_server.go:141] control plane version: v1.28.2
	I1005 20:22:42.243144  427001 api_server.go:131] duration metric: took 6.537127ms to wait for apiserver health ...
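The /version response above can also be fetched through client-go's discovery client, which decodes the same JSON into a version.Info struct; a minimal sketch with a hypothetical kubeconfig path:

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Hypothetical kubeconfig path; adjust for your environment.
    	config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	clientset, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	// ServerVersion hits the same /version endpoint shown in the log.
    	info, err := clientset.Discovery().ServerVersion()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("control plane version: %s (go %s, %s)\n",
    		info.GitVersion, info.GoVersion, info.Platform)
    }
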
	I1005 20:22:42.243153  427001 system_pods.go:43] waiting for kube-system pods to appear ...
	I1005 20:22:42.421583  427001 request.go:629] Waited for 178.325352ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1005 20:22:42.421669  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1005 20:22:42.421681  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:42.421693  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:42.421703  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:42.425357  427001 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1005 20:22:42.425388  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:42.425397  427001 round_trippers.go:580]     Audit-Id: 37f0e075-9e0c-45d3-9553-52aac0d14801
	I1005 20:22:42.425403  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:42.425408  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:42.425415  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:42.425420  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:42.425427  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:42 GMT
	I1005 20:22:42.425916  427001 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"448"},"items":[{"metadata":{"name":"coredns-5dd5756b68-nctb6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6db951fc-21de-44c9-9e94-cfe1ab7ac040","resourceVersion":"442","creationTimestamp":"2023-10-05T20:22:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"63fb560e-8695-4b28-89cf-d0b3759b9e96","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"63fb560e-8695-4b28-89cf-d0b3759b9e96\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I1005 20:22:42.427808  427001 system_pods.go:59] 8 kube-system pods found
	I1005 20:22:42.427841  427001 system_pods.go:61] "coredns-5dd5756b68-nctb6" [6db951fc-21de-44c9-9e94-cfe1ab7ac040] Running
	I1005 20:22:42.427848  427001 system_pods.go:61] "etcd-multinode-401792" [44ad3fe4-b132-45ea-93d3-35a3740a12ea] Running
	I1005 20:22:42.427859  427001 system_pods.go:61] "kindnet-fnck9" [75b1ee02-b425-4768-aeda-451e53baaaa6] Running
	I1005 20:22:42.427872  427001 system_pods.go:61] "kube-apiserver-multinode-401792" [8b0de222-02fa-4bd6-b82e-e4b5e09908ec] Running
	I1005 20:22:42.427880  427001 system_pods.go:61] "kube-controller-manager-multinode-401792" [99f9133a-d7c6-4415-9d5d-d215ed75bc7b] Running
	I1005 20:22:42.427888  427001 system_pods.go:61] "kube-proxy-l9dpz" [386ee581-e207-45ad-a08c-86a0804a2233] Running
	I1005 20:22:42.427893  427001 system_pods.go:61] "kube-scheduler-multinode-401792" [33043544-61f1-4457-b66d-11bfdac4a024] Running
	I1005 20:22:42.427900  427001 system_pods.go:61] "storage-provisioner" [55fb2b0c-b3ba-4b56-b893-95190206e5ff] Running
	I1005 20:22:42.427907  427001 system_pods.go:74] duration metric: took 184.748839ms to wait for pod list to return data ...
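system_pods.go above lists the kube-system namespace and reports each pod's phase. An equivalent client-go sketch, assuming a hypothetical kubeconfig path:

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Hypothetical kubeconfig path; adjust for your environment.
    	config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	clientset, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
    	for _, pod := range pods.Items {
    		// Mirrors the "<name> [<uid>] Running" lines in the log above.
    		running := pod.Status.Phase == corev1.PodRunning
    		fmt.Printf("%q [%s] running=%v\n", pod.Name, pod.UID, running)
    	}
    }
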
	I1005 20:22:42.427917  427001 default_sa.go:34] waiting for default service account to be created ...
	I1005 20:22:42.622357  427001 request.go:629] Waited for 194.346881ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1005 20:22:42.622431  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1005 20:22:42.622436  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:42.622445  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:42.622451  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:42.625022  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:42.625045  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:42.625053  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:42 GMT
	I1005 20:22:42.625058  427001 round_trippers.go:580]     Audit-Id: 1f400834-8163-4118-be45-8bb99b26fb96
	I1005 20:22:42.625063  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:42.625068  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:42.625073  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:42.625078  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:42.625084  427001 round_trippers.go:580]     Content-Length: 261
	I1005 20:22:42.625114  427001 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"448"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"48ab7218-b2d1-4916-8a71-5a53cdabcd91","resourceVersion":"331","creationTimestamp":"2023-10-05T20:22:07Z"}}]}
	I1005 20:22:42.625331  427001 default_sa.go:45] found service account: "default"
	I1005 20:22:42.625350  427001 default_sa.go:55] duration metric: took 197.421518ms for default service account to be created ...
	I1005 20:22:42.625363  427001 system_pods.go:116] waiting for k8s-apps to be running ...
	I1005 20:22:42.821810  427001 request.go:629] Waited for 196.352531ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1005 20:22:42.821876  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1005 20:22:42.821881  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:42.821889  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:42.821900  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:42.825278  427001 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1005 20:22:42.825300  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:42.825307  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:42 GMT
	I1005 20:22:42.825312  427001 round_trippers.go:580]     Audit-Id: de2cdbd9-28b8-48a9-97a0-fde19231e8fe
	I1005 20:22:42.825317  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:42.825322  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:42.825327  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:42.825332  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:42.825827  427001 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"448"},"items":[{"metadata":{"name":"coredns-5dd5756b68-nctb6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6db951fc-21de-44c9-9e94-cfe1ab7ac040","resourceVersion":"442","creationTimestamp":"2023-10-05T20:22:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"63fb560e-8695-4b28-89cf-d0b3759b9e96","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"63fb560e-8695-4b28-89cf-d0b3759b9e96\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I1005 20:22:42.827661  427001 system_pods.go:86] 8 kube-system pods found
	I1005 20:22:42.827691  427001 system_pods.go:89] "coredns-5dd5756b68-nctb6" [6db951fc-21de-44c9-9e94-cfe1ab7ac040] Running
	I1005 20:22:42.827701  427001 system_pods.go:89] "etcd-multinode-401792" [44ad3fe4-b132-45ea-93d3-35a3740a12ea] Running
	I1005 20:22:42.827707  427001 system_pods.go:89] "kindnet-fnck9" [75b1ee02-b425-4768-aeda-451e53baaaa6] Running
	I1005 20:22:42.827714  427001 system_pods.go:89] "kube-apiserver-multinode-401792" [8b0de222-02fa-4bd6-b82e-e4b5e09908ec] Running
	I1005 20:22:42.827721  427001 system_pods.go:89] "kube-controller-manager-multinode-401792" [99f9133a-d7c6-4415-9d5d-d215ed75bc7b] Running
	I1005 20:22:42.827728  427001 system_pods.go:89] "kube-proxy-l9dpz" [386ee581-e207-45ad-a08c-86a0804a2233] Running
	I1005 20:22:42.827738  427001 system_pods.go:89] "kube-scheduler-multinode-401792" [33043544-61f1-4457-b66d-11bfdac4a024] Running
	I1005 20:22:42.827749  427001 system_pods.go:89] "storage-provisioner" [55fb2b0c-b3ba-4b56-b893-95190206e5ff] Running
	I1005 20:22:42.827759  427001 system_pods.go:126] duration metric: took 202.385864ms to wait for k8s-apps to be running ...
	I1005 20:22:42.827778  427001 system_svc.go:44] waiting for kubelet service to be running ....
	I1005 20:22:42.827837  427001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1005 20:22:42.839586  427001 system_svc.go:56] duration metric: took 11.79363ms WaitForService to wait for kubelet.
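The kubelet probe above relies entirely on systemctl's exit code: "systemctl is-active --quiet" prints nothing and exits 0 only when the unit is active. A local sketch of the same check (minikube runs it with sudo over SSH inside the node):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Exit code 0 means the unit is active; any other code (or an
    	// exec error) means it is not.
    	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
    	if err != nil {
    		fmt.Println("kubelet is not active:", err)
    		return
    	}
    	fmt.Println("kubelet is active")
    }
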
	I1005 20:22:42.839614  427001 kubeadm.go:581] duration metric: took 34.701358444s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1005 20:22:42.839636  427001 node_conditions.go:102] verifying NodePressure condition ...
	I1005 20:22:43.022076  427001 request.go:629] Waited for 182.342431ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1005 20:22:43.022158  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1005 20:22:43.022163  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:43.022172  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:43.022179  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:43.024714  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:43.024737  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:43.024744  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:43 GMT
	I1005 20:22:43.024750  427001 round_trippers.go:580]     Audit-Id: 4fb84dc7-6bf8-46a2-9bd0-84cdf59bdbb0
	I1005 20:22:43.024755  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:43.024760  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:43.024766  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:43.024776  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:43.024901  427001 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"448"},"items":[{"metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"424","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6000 chars]
	I1005 20:22:43.025290  427001 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1005 20:22:43.025314  427001 node_conditions.go:123] node cpu capacity is 8
	I1005 20:22:43.025336  427001 node_conditions.go:105] duration metric: took 185.694775ms to run NodePressure ...
	I1005 20:22:43.025351  427001 start.go:228] waiting for startup goroutines ...
	I1005 20:22:43.025371  427001 start.go:233] waiting for cluster config update ...
	I1005 20:22:43.025383  427001 start.go:242] writing updated cluster config ...
	I1005 20:22:43.027374  427001 out.go:177] 
	I1005 20:22:43.029228  427001 config.go:182] Loaded profile config "multinode-401792": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1005 20:22:43.029327  427001 profile.go:148] Saving config to /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/multinode-401792/config.json ...
	I1005 20:22:43.031192  427001 out.go:177] * Starting worker node multinode-401792-m02 in cluster multinode-401792
	I1005 20:22:43.032486  427001 cache.go:122] Beginning downloading kic base image for docker with crio
	I1005 20:22:43.033878  427001 out.go:177] * Pulling base image ...
	I1005 20:22:43.035792  427001 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1005 20:22:43.035824  427001 cache.go:57] Caching tarball of preloaded images
	I1005 20:22:43.035890  427001 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon
	I1005 20:22:43.035979  427001 preload.go:174] Found /home/jenkins/minikube-integration/17363-334135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1005 20:22:43.035996  427001 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1005 20:22:43.036108  427001 profile.go:148] Saving config to /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/multinode-401792/config.json ...
	I1005 20:22:43.054035  427001 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon, skipping pull
	I1005 20:22:43.054066  427001 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae exists in daemon, skipping load
	I1005 20:22:43.054123  427001 cache.go:195] Successfully downloaded all kic artifacts
	I1005 20:22:43.054166  427001 start.go:365] acquiring machines lock for multinode-401792-m02: {Name:mk468bd4d239b9d50aaf004be0693150455818a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 20:22:43.054293  427001 start.go:369] acquired machines lock for "multinode-401792-m02" in 103.899µs
	I1005 20:22:43.054326  427001 start.go:93] Provisioning new machine with config: &{Name:multinode-401792 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-401792 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}
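
(The trailing `&{Name:m02 IP: Port:0 ...}` fragment is Go's `%+v` rendering of the node record about to be provisioned. A minimal sketch of the struct shape that output implies — field names are taken from the log line itself, types are inferred, and the real type in minikube's config package has many more fields:)

```go
package main

import "fmt"

// Node mirrors only the fields visible in the "%+v" dump above;
// this is an inferred shape, not minikube's actual definition.
type Node struct {
	Name              string
	IP                string
	Port              int
	KubernetesVersion string
	ContainerRuntime  string
	ControlPlane      bool
	Worker            bool
}

func main() {
	m02 := Node{Name: "m02", KubernetesVersion: "v1.28.2", ContainerRuntime: "crio", Worker: true}
	// Prints: &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	// Zero-value strings render as empty and ints as 0, exactly as in the log.
	fmt.Printf("%+v\n", &m02)
}
```
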
	I1005 20:22:43.054450  427001 start.go:125] createHost starting for "m02" (driver="docker")
	I1005 20:22:43.057223  427001 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1005 20:22:43.057369  427001 start.go:159] libmachine.API.Create for "multinode-401792" (driver="docker")
	I1005 20:22:43.057415  427001 client.go:168] LocalClient.Create starting
	I1005 20:22:43.057502  427001 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17363-334135/.minikube/certs/ca.pem
	I1005 20:22:43.057548  427001 main.go:141] libmachine: Decoding PEM data...
	I1005 20:22:43.057573  427001 main.go:141] libmachine: Parsing certificate...
	I1005 20:22:43.057649  427001 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17363-334135/.minikube/certs/cert.pem
	I1005 20:22:43.057680  427001 main.go:141] libmachine: Decoding PEM data...
	I1005 20:22:43.057698  427001 main.go:141] libmachine: Parsing certificate...
	I1005 20:22:43.057943  427001 cli_runner.go:164] Run: docker network inspect multinode-401792 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1005 20:22:43.074874  427001 network_create.go:77] Found existing network {name:multinode-401792 subnet:0xc000a149f0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I1005 20:22:43.074917  427001 kic.go:117] calculated static IP "192.168.58.3" for the "multinode-401792-m02" container
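
(The "calculated static IP" above is the network gateway — 192.168.58.1, printed just before as a 16-byte IPv4-mapped slice — plus the node's ordinal: the control plane holds .2, so m02 gets .3. A hedged sketch of that arithmetic, illustrative only and not minikube's actual allocation code:)

```go
package main

import (
	"fmt"
	"net"
)

// nthHostIP returns gateway+offset within the same /24; it assumes an
// IPv4 gateway and that the offset stays inside the subnet.
func nthHostIP(gateway net.IP, offset int) net.IP {
	ip := gateway.To4()
	out := make(net.IP, 4)
	copy(out, ip)
	out[3] += byte(offset)
	return out
}

func main() {
	gw := net.ParseIP("192.168.58.1")
	fmt.Println(nthHostIP(gw, 2)) // 192.168.58.3 for multinode-401792-m02
}
```
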
	I1005 20:22:43.074992  427001 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1005 20:22:43.092966  427001 cli_runner.go:164] Run: docker volume create multinode-401792-m02 --label name.minikube.sigs.k8s.io=multinode-401792-m02 --label created_by.minikube.sigs.k8s.io=true
	I1005 20:22:43.111635  427001 oci.go:103] Successfully created a docker volume multinode-401792-m02
	I1005 20:22:43.111713  427001 cli_runner.go:164] Run: docker run --rm --name multinode-401792-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-401792-m02 --entrypoint /usr/bin/test -v multinode-401792-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -d /var/lib
	I1005 20:22:43.687698  427001 oci.go:107] Successfully prepared a docker volume multinode-401792-m02
	I1005 20:22:43.687754  427001 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1005 20:22:43.687784  427001 kic.go:190] Starting extracting preloaded images to volume ...
	I1005 20:22:43.687870  427001 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17363-334135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-401792-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir
	I1005 20:22:48.892173  427001 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17363-334135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-401792-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir: (5.204248767s)
	I1005 20:22:48.892208  427001 kic.go:199] duration metric: took 5.204421 seconds to extract preloaded images to volume
	W1005 20:22:48.892364  427001 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1005 20:22:48.892454  427001 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1005 20:22:48.949884  427001 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-401792-m02 --name multinode-401792-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-401792-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-401792-m02 --network multinode-401792 --ip 192.168.58.3 --volume multinode-401792-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae
	I1005 20:22:49.274409  427001 cli_runner.go:164] Run: docker container inspect multinode-401792-m02 --format={{.State.Running}}
	I1005 20:22:49.293481  427001 cli_runner.go:164] Run: docker container inspect multinode-401792-m02 --format={{.State.Status}}
	I1005 20:22:49.312571  427001 cli_runner.go:164] Run: docker exec multinode-401792-m02 stat /var/lib/dpkg/alternatives/iptables
	I1005 20:22:49.354771  427001 oci.go:144] the created container "multinode-401792-m02" has a running status.
	I1005 20:22:49.354808  427001 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17363-334135/.minikube/machines/multinode-401792-m02/id_rsa...
	I1005 20:22:49.495059  427001 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-334135/.minikube/machines/multinode-401792-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1005 20:22:49.495130  427001 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17363-334135/.minikube/machines/multinode-401792-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1005 20:22:49.515953  427001 cli_runner.go:164] Run: docker container inspect multinode-401792-m02 --format={{.State.Status}}
	I1005 20:22:49.533392  427001 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1005 20:22:49.533416  427001 kic_runner.go:114] Args: [docker exec --privileged multinode-401792-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1005 20:22:49.628778  427001 cli_runner.go:164] Run: docker container inspect multinode-401792-m02 --format={{.State.Status}}
	I1005 20:22:49.645869  427001 machine.go:88] provisioning docker machine ...
	I1005 20:22:49.645957  427001 ubuntu.go:169] provisioning hostname "multinode-401792-m02"
	I1005 20:22:49.646037  427001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-401792-m02
	I1005 20:22:49.664941  427001 main.go:141] libmachine: Using SSH client type: native
	I1005 20:22:49.665431  427001 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 127.0.0.1 33154 <nil> <nil>}
	I1005 20:22:49.665452  427001 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-401792-m02 && echo "multinode-401792-m02" | sudo tee /etc/hostname
	I1005 20:22:49.666295  427001 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47286->127.0.0.1:33154: read: connection reset by peer
	I1005 20:22:52.814865  427001 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-401792-m02
	
	I1005 20:22:52.814953  427001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-401792-m02
	I1005 20:22:52.832487  427001 main.go:141] libmachine: Using SSH client type: native
	I1005 20:22:52.832851  427001 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 127.0.0.1 33154 <nil> <nil>}
	I1005 20:22:52.832878  427001 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-401792-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-401792-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-401792-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1005 20:22:52.967559  427001 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1005 20:22:52.967600  427001 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17363-334135/.minikube CaCertPath:/home/jenkins/minikube-integration/17363-334135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17363-334135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17363-334135/.minikube}
	I1005 20:22:52.967624  427001 ubuntu.go:177] setting up certificates
	I1005 20:22:52.967637  427001 provision.go:83] configureAuth start
	I1005 20:22:52.967695  427001 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-401792-m02
	I1005 20:22:52.985019  427001 provision.go:138] copyHostCerts
	I1005 20:22:52.985072  427001 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-334135/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17363-334135/.minikube/ca.pem
	I1005 20:22:52.985114  427001 exec_runner.go:144] found /home/jenkins/minikube-integration/17363-334135/.minikube/ca.pem, removing ...
	I1005 20:22:52.985127  427001 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17363-334135/.minikube/ca.pem
	I1005 20:22:52.985198  427001 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-334135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17363-334135/.minikube/ca.pem (1078 bytes)
	I1005 20:22:52.985279  427001 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-334135/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17363-334135/.minikube/cert.pem
	I1005 20:22:52.985298  427001 exec_runner.go:144] found /home/jenkins/minikube-integration/17363-334135/.minikube/cert.pem, removing ...
	I1005 20:22:52.985305  427001 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17363-334135/.minikube/cert.pem
	I1005 20:22:52.985337  427001 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-334135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17363-334135/.minikube/cert.pem (1123 bytes)
	I1005 20:22:52.985429  427001 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-334135/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17363-334135/.minikube/key.pem
	I1005 20:22:52.985452  427001 exec_runner.go:144] found /home/jenkins/minikube-integration/17363-334135/.minikube/key.pem, removing ...
	I1005 20:22:52.985460  427001 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17363-334135/.minikube/key.pem
	I1005 20:22:52.985484  427001 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-334135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17363-334135/.minikube/key.pem (1675 bytes)
	I1005 20:22:52.985530  427001 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17363-334135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17363-334135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17363-334135/.minikube/certs/ca-key.pem org=jenkins.multinode-401792-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-401792-m02]
	I1005 20:22:53.139477  427001 provision.go:172] copyRemoteCerts
	I1005 20:22:53.139542  427001 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1005 20:22:53.139579  427001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-401792-m02
	I1005 20:22:53.157608  427001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33154 SSHKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/multinode-401792-m02/id_rsa Username:docker}
	I1005 20:22:53.256628  427001 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-334135/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1005 20:22:53.256715  427001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-334135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1005 20:22:53.281104  427001 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-334135/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1005 20:22:53.281205  427001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-334135/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1005 20:22:53.305794  427001 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-334135/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1005 20:22:53.305857  427001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-334135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1005 20:22:53.330694  427001 provision.go:86] duration metric: configureAuth took 363.037702ms
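
(The configureAuth step just completed generates a server certificate whose SAN list mixes IP and DNS entries — `san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-401792-m02]` above. A minimal standard-library sketch of such a certificate; for brevity it self-signs, whereas the real provisioner signs with the minikube CA:)

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-401792-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs as logged: IPs and hostnames travel in separate fields.
		IPAddresses: []net.IP{net.ParseIP("192.168.58.3"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "multinode-401792-m02"},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```
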
	I1005 20:22:53.330728  427001 ubuntu.go:193] setting minikube options for container-runtime
	I1005 20:22:53.330920  427001 config.go:182] Loaded profile config "multinode-401792": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1005 20:22:53.331024  427001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-401792-m02
	I1005 20:22:53.348483  427001 main.go:141] libmachine: Using SSH client type: native
	I1005 20:22:53.348832  427001 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 127.0.0.1 33154 <nil> <nil>}
	I1005 20:22:53.348850  427001 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1005 20:22:53.578526  427001 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1005 20:22:53.578559  427001 machine.go:91] provisioned docker machine in 3.932661639s
	I1005 20:22:53.578571  427001 client.go:171] LocalClient.Create took 10.521146736s
	I1005 20:22:53.578595  427001 start.go:167] duration metric: libmachine.API.Create for "multinode-401792" took 10.521225985s
	I1005 20:22:53.578606  427001 start.go:300] post-start starting for "multinode-401792-m02" (driver="docker")
	I1005 20:22:53.578623  427001 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1005 20:22:53.578699  427001 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1005 20:22:53.578749  427001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-401792-m02
	I1005 20:22:53.596766  427001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33154 SSHKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/multinode-401792-m02/id_rsa Username:docker}
	I1005 20:22:53.692724  427001 ssh_runner.go:195] Run: cat /etc/os-release
	I1005 20:22:53.696355  427001 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1005 20:22:53.696379  427001 command_runner.go:130] > NAME="Ubuntu"
	I1005 20:22:53.696386  427001 command_runner.go:130] > VERSION_ID="22.04"
	I1005 20:22:53.696391  427001 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1005 20:22:53.696396  427001 command_runner.go:130] > VERSION_CODENAME=jammy
	I1005 20:22:53.696400  427001 command_runner.go:130] > ID=ubuntu
	I1005 20:22:53.696404  427001 command_runner.go:130] > ID_LIKE=debian
	I1005 20:22:53.696408  427001 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1005 20:22:53.696413  427001 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1005 20:22:53.696420  427001 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1005 20:22:53.696426  427001 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1005 20:22:53.696431  427001 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1005 20:22:53.696520  427001 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1005 20:22:53.696546  427001 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1005 20:22:53.696554  427001 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1005 20:22:53.696566  427001 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1005 20:22:53.696585  427001 filesync.go:126] Scanning /home/jenkins/minikube-integration/17363-334135/.minikube/addons for local assets ...
	I1005 20:22:53.696663  427001 filesync.go:126] Scanning /home/jenkins/minikube-integration/17363-334135/.minikube/files for local assets ...
	I1005 20:22:53.696748  427001 filesync.go:149] local asset: /home/jenkins/minikube-integration/17363-334135/.minikube/files/etc/ssl/certs/3409292.pem -> 3409292.pem in /etc/ssl/certs
	I1005 20:22:53.696763  427001 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-334135/.minikube/files/etc/ssl/certs/3409292.pem -> /etc/ssl/certs/3409292.pem
	I1005 20:22:53.696862  427001 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1005 20:22:53.705918  427001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-334135/.minikube/files/etc/ssl/certs/3409292.pem --> /etc/ssl/certs/3409292.pem (1708 bytes)
	I1005 20:22:53.730266  427001 start.go:303] post-start completed in 151.637045ms
	I1005 20:22:53.730672  427001 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-401792-m02
	I1005 20:22:53.749898  427001 profile.go:148] Saving config to /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/multinode-401792/config.json ...
	I1005 20:22:53.750207  427001 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1005 20:22:53.750270  427001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-401792-m02
	I1005 20:22:53.767899  427001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33154 SSHKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/multinode-401792-m02/id_rsa Username:docker}
	I1005 20:22:53.860361  427001 command_runner.go:130] > 19%!(MISSING)
	I1005 20:22:53.860451  427001 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1005 20:22:53.865008  427001 command_runner.go:130] > 237G
	I1005 20:22:53.865044  427001 start.go:128] duration metric: createHost completed in 10.81058311s
	I1005 20:22:53.865053  427001 start.go:83] releasing machines lock for "multinode-401792-m02", held for 10.810745335s
	I1005 20:22:53.865120  427001 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-401792-m02
	I1005 20:22:53.885229  427001 out.go:177] * Found network options:
	I1005 20:22:53.886943  427001 out.go:177]   - NO_PROXY=192.168.58.2
	W1005 20:22:53.888420  427001 proxy.go:119] fail to check proxy env: Error ip not in block
	W1005 20:22:53.888477  427001 proxy.go:119] fail to check proxy env: Error ip not in block
	I1005 20:22:53.888573  427001 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1005 20:22:53.888630  427001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-401792-m02
	I1005 20:22:53.888637  427001 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1005 20:22:53.888694  427001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-401792-m02
	I1005 20:22:53.906756  427001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33154 SSHKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/multinode-401792-m02/id_rsa Username:docker}
	I1005 20:22:53.906753  427001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33154 SSHKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/multinode-401792-m02/id_rsa Username:docker}
	I1005 20:22:54.089483  427001 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1005 20:22:54.140417  427001 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1005 20:22:54.145404  427001 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1005 20:22:54.145439  427001 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1005 20:22:54.145452  427001 command_runner.go:130] > Device: b0h/176d	Inode: 1299532     Links: 1
	I1005 20:22:54.145462  427001 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1005 20:22:54.145476  427001 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I1005 20:22:54.145491  427001 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1005 20:22:54.145500  427001 command_runner.go:130] > Change: 2023-10-05 20:03:12.987873402 +0000
	I1005 20:22:54.145508  427001 command_runner.go:130] >  Birth: 2023-10-05 20:03:12.987873402 +0000
	I1005 20:22:54.145620  427001 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1005 20:22:54.165992  427001 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1005 20:22:54.166066  427001 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1005 20:22:54.195238  427001 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1005 20:22:54.195321  427001 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1005 20:22:54.195334  427001 start.go:469] detecting cgroup driver to use...
	I1005 20:22:54.195376  427001 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1005 20:22:54.195437  427001 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1005 20:22:54.210844  427001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1005 20:22:54.222057  427001 docker.go:197] disabling cri-docker service (if available) ...
	I1005 20:22:54.222125  427001 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1005 20:22:54.235623  427001 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1005 20:22:54.250310  427001 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1005 20:22:54.325514  427001 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1005 20:22:54.339574  427001 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1005 20:22:54.411668  427001 docker.go:213] disabling docker service ...
	I1005 20:22:54.411732  427001 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1005 20:22:54.431567  427001 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1005 20:22:54.443717  427001 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1005 20:22:54.529020  427001 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1005 20:22:54.529088  427001 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1005 20:22:54.617653  427001 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1005 20:22:54.617752  427001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1005 20:22:54.629846  427001 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1005 20:22:54.646088  427001 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1005 20:22:54.646133  427001 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1005 20:22:54.646196  427001 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1005 20:22:54.656264  427001 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1005 20:22:54.656333  427001 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1005 20:22:54.666258  427001 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1005 20:22:54.676404  427001 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1005 20:22:54.686339  427001 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1005 20:22:54.696234  427001 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1005 20:22:54.704259  427001 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1005 20:22:54.704861  427001 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1005 20:22:54.714208  427001 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1005 20:22:54.794209  427001 ssh_runner.go:195] Run: sudo systemctl restart crio
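
(The `sed` commands a few lines up rewrite single keys of `/etc/crio/crio.conf.d/02-crio.conf` in place before crio is restarted. The same rewrites expressed in Go, using the patterns from the logged commands; the sample input values are made up for illustration:)

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `pause_image = "registry.k8s.io/pause:3.8"
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"`

	// Mirror the logged sed edits: force the pause image, drop any existing
	// conmon_cgroup line, then set cgroupfs and re-add conmon_cgroup = "pod"
	// right after the cgroup_manager line.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n?`).
		ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	fmt.Println(conf)
}
```
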
	I1005 20:22:54.909416  427001 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1005 20:22:54.909482  427001 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1005 20:22:54.913022  427001 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1005 20:22:54.913052  427001 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1005 20:22:54.913064  427001 command_runner.go:130] > Device: bah/186d	Inode: 190         Links: 1
	I1005 20:22:54.913076  427001 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1005 20:22:54.913084  427001 command_runner.go:130] > Access: 2023-10-05 20:22:54.897263089 +0000
	I1005 20:22:54.913100  427001 command_runner.go:130] > Modify: 2023-10-05 20:22:54.897263089 +0000
	I1005 20:22:54.913110  427001 command_runner.go:130] > Change: 2023-10-05 20:22:54.897263089 +0000
	I1005 20:22:54.913115  427001 command_runner.go:130] >  Birth: -
	I1005 20:22:54.913156  427001 start.go:537] Will wait 60s for crictl version
	I1005 20:22:54.913196  427001 ssh_runner.go:195] Run: which crictl
	I1005 20:22:54.916553  427001 command_runner.go:130] > /usr/bin/crictl
	I1005 20:22:54.916686  427001 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1005 20:22:54.952439  427001 command_runner.go:130] > Version:  0.1.0
	I1005 20:22:54.952462  427001 command_runner.go:130] > RuntimeName:  cri-o
	I1005 20:22:54.952466  427001 command_runner.go:130] > RuntimeVersion:  1.24.6
	I1005 20:22:54.952471  427001 command_runner.go:130] > RuntimeApiVersion:  v1
	I1005 20:22:54.952489  427001 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1005 20:22:54.952555  427001 ssh_runner.go:195] Run: crio --version
	I1005 20:22:54.989449  427001 command_runner.go:130] > crio version 1.24.6
	I1005 20:22:54.989471  427001 command_runner.go:130] > Version:          1.24.6
	I1005 20:22:54.989478  427001 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1005 20:22:54.989482  427001 command_runner.go:130] > GitTreeState:     clean
	I1005 20:22:54.989488  427001 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1005 20:22:54.989492  427001 command_runner.go:130] > GoVersion:        go1.18.2
	I1005 20:22:54.989498  427001 command_runner.go:130] > Compiler:         gc
	I1005 20:22:54.989503  427001 command_runner.go:130] > Platform:         linux/amd64
	I1005 20:22:54.989507  427001 command_runner.go:130] > Linkmode:         dynamic
	I1005 20:22:54.989514  427001 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1005 20:22:54.989519  427001 command_runner.go:130] > SeccompEnabled:   true
	I1005 20:22:54.989523  427001 command_runner.go:130] > AppArmorEnabled:  false
	I1005 20:22:54.989596  427001 ssh_runner.go:195] Run: crio --version
	I1005 20:22:55.026743  427001 command_runner.go:130] > crio version 1.24.6
	I1005 20:22:55.026771  427001 command_runner.go:130] > Version:          1.24.6
	I1005 20:22:55.026782  427001 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1005 20:22:55.026788  427001 command_runner.go:130] > GitTreeState:     clean
	I1005 20:22:55.026797  427001 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1005 20:22:55.026803  427001 command_runner.go:130] > GoVersion:        go1.18.2
	I1005 20:22:55.026808  427001 command_runner.go:130] > Compiler:         gc
	I1005 20:22:55.026815  427001 command_runner.go:130] > Platform:         linux/amd64
	I1005 20:22:55.026823  427001 command_runner.go:130] > Linkmode:         dynamic
	I1005 20:22:55.026835  427001 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1005 20:22:55.026856  427001 command_runner.go:130] > SeccompEnabled:   true
	I1005 20:22:55.026868  427001 command_runner.go:130] > AppArmorEnabled:  false
	I1005 20:22:55.029706  427001 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.6 ...
	I1005 20:22:55.031255  427001 out.go:177]   - env NO_PROXY=192.168.58.2
	I1005 20:22:55.032786  427001 cli_runner.go:164] Run: docker network inspect multinode-401792 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1005 20:22:55.049568  427001 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1005 20:22:55.053325  427001 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1005 20:22:55.064408  427001 certs.go:56] Setting up /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/multinode-401792 for IP: 192.168.58.3
	I1005 20:22:55.064451  427001 certs.go:190] acquiring lock for shared ca certs: {Name:mk1be6ef34f8fc4cfa2ec636f9e6906c15e2096a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:22:55.064597  427001 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17363-334135/.minikube/ca.key
	I1005 20:22:55.064643  427001 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17363-334135/.minikube/proxy-client-ca.key
	I1005 20:22:55.064657  427001 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-334135/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1005 20:22:55.064671  427001 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-334135/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1005 20:22:55.064684  427001 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-334135/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1005 20:22:55.064703  427001 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-334135/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1005 20:22:55.064754  427001 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-334135/.minikube/certs/home/jenkins/minikube-integration/17363-334135/.minikube/certs/340929.pem (1338 bytes)
	W1005 20:22:55.064782  427001 certs.go:433] ignoring /home/jenkins/minikube-integration/17363-334135/.minikube/certs/home/jenkins/minikube-integration/17363-334135/.minikube/certs/340929_empty.pem, impossibly tiny 0 bytes
	I1005 20:22:55.064792  427001 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-334135/.minikube/certs/home/jenkins/minikube-integration/17363-334135/.minikube/certs/ca-key.pem (1679 bytes)
	I1005 20:22:55.064817  427001 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-334135/.minikube/certs/home/jenkins/minikube-integration/17363-334135/.minikube/certs/ca.pem (1078 bytes)
	I1005 20:22:55.064840  427001 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-334135/.minikube/certs/home/jenkins/minikube-integration/17363-334135/.minikube/certs/cert.pem (1123 bytes)
	I1005 20:22:55.064863  427001 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-334135/.minikube/certs/home/jenkins/minikube-integration/17363-334135/.minikube/certs/key.pem (1675 bytes)
	I1005 20:22:55.064900  427001 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-334135/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17363-334135/.minikube/files/etc/ssl/certs/3409292.pem (1708 bytes)
	I1005 20:22:55.064925  427001 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-334135/.minikube/files/etc/ssl/certs/3409292.pem -> /usr/share/ca-certificates/3409292.pem
	I1005 20:22:55.064937  427001 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-334135/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1005 20:22:55.064949  427001 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-334135/.minikube/certs/340929.pem -> /usr/share/ca-certificates/340929.pem
	I1005 20:22:55.065313  427001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-334135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1005 20:22:55.089167  427001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-334135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1005 20:22:55.112812  427001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-334135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1005 20:22:55.136209  427001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-334135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1005 20:22:55.159899  427001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-334135/.minikube/files/etc/ssl/certs/3409292.pem --> /usr/share/ca-certificates/3409292.pem (1708 bytes)
	I1005 20:22:55.183728  427001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-334135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1005 20:22:55.207265  427001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-334135/.minikube/certs/340929.pem --> /usr/share/ca-certificates/340929.pem (1338 bytes)
	I1005 20:22:55.231128  427001 ssh_runner.go:195] Run: openssl version
	I1005 20:22:55.236289  427001 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1005 20:22:55.236410  427001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/340929.pem && ln -fs /usr/share/ca-certificates/340929.pem /etc/ssl/certs/340929.pem"
	I1005 20:22:55.245934  427001 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/340929.pem
	I1005 20:22:55.249564  427001 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  5 20:09 /usr/share/ca-certificates/340929.pem
	I1005 20:22:55.249605  427001 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  5 20:09 /usr/share/ca-certificates/340929.pem
	I1005 20:22:55.249659  427001 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/340929.pem
	I1005 20:22:55.256286  427001 command_runner.go:130] > 51391683
	I1005 20:22:55.256479  427001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/340929.pem /etc/ssl/certs/51391683.0"
	I1005 20:22:55.266103  427001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3409292.pem && ln -fs /usr/share/ca-certificates/3409292.pem /etc/ssl/certs/3409292.pem"
	I1005 20:22:55.275783  427001 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3409292.pem
	I1005 20:22:55.279498  427001 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  5 20:09 /usr/share/ca-certificates/3409292.pem
	I1005 20:22:55.279559  427001 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  5 20:09 /usr/share/ca-certificates/3409292.pem
	I1005 20:22:55.279615  427001 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3409292.pem
	I1005 20:22:55.286380  427001 command_runner.go:130] > 3ec20f2e
	I1005 20:22:55.286458  427001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3409292.pem /etc/ssl/certs/3ec20f2e.0"
	I1005 20:22:55.296158  427001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1005 20:22:55.305818  427001 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1005 20:22:55.309492  427001 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  5 20:03 /usr/share/ca-certificates/minikubeCA.pem
	I1005 20:22:55.309539  427001 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  5 20:03 /usr/share/ca-certificates/minikubeCA.pem
	I1005 20:22:55.309584  427001 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1005 20:22:55.316506  427001 command_runner.go:130] > b5213941
	I1005 20:22:55.316627  427001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
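
(The `openssl x509 -hash` / `ln -fs` pairs above implement OpenSSL's hashed-directory lookup: each CA certificate under `/etc/ssl/certs` gets a `<subject-hash>.0` symlink so the library can find it by subject. A sketch that reproduces one pair by shelling out to the same commands the log shows; it needs openssl installed and write access to /etc/ssl/certs:)

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const cert = "/usr/share/ca-certificates/minikubeCA.pem"

	// "openssl x509 -hash -noout" prints the subject hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))

	// Recreate the logged "ln -fs <cert> /etc/ssl/certs/<hash>.0" step.
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	os.Remove(link) // like ln -f: replace an existing link if present
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
	fmt.Println(link, "->", cert)
}
```
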
	I1005 20:22:55.326317  427001 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1005 20:22:55.329935  427001 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1005 20:22:55.330010  427001 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1005 20:22:55.330145  427001 ssh_runner.go:195] Run: crio config
	I1005 20:22:55.370821  427001 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1005 20:22:55.370854  427001 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1005 20:22:55.370865  427001 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1005 20:22:55.370871  427001 command_runner.go:130] > #
	I1005 20:22:55.370882  427001 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1005 20:22:55.370893  427001 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1005 20:22:55.370912  427001 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1005 20:22:55.370934  427001 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1005 20:22:55.370941  427001 command_runner.go:130] > # reload'.
	I1005 20:22:55.370952  427001 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1005 20:22:55.370971  427001 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1005 20:22:55.370984  427001 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1005 20:22:55.370993  427001 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1005 20:22:55.370997  427001 command_runner.go:130] > [crio]
	I1005 20:22:55.371005  427001 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1005 20:22:55.371012  427001 command_runner.go:130] > # containers images, in this directory.
	I1005 20:22:55.371023  427001 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1005 20:22:55.371035  427001 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1005 20:22:55.371044  427001 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I1005 20:22:55.371053  427001 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1005 20:22:55.371127  427001 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1005 20:22:55.371142  427001 command_runner.go:130] > # storage_driver = "vfs"
	I1005 20:22:55.371152  427001 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1005 20:22:55.371162  427001 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1005 20:22:55.371169  427001 command_runner.go:130] > # storage_option = [
	I1005 20:22:55.371175  427001 command_runner.go:130] > # ]
	I1005 20:22:55.371185  427001 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1005 20:22:55.371193  427001 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1005 20:22:55.371199  427001 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1005 20:22:55.371208  427001 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1005 20:22:55.371219  427001 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1005 20:22:55.371227  427001 command_runner.go:130] > # always happen on a node reboot
	I1005 20:22:55.371235  427001 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1005 20:22:55.371244  427001 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1005 20:22:55.371253  427001 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1005 20:22:55.371267  427001 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1005 20:22:55.371276  427001 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1005 20:22:55.371288  427001 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1005 20:22:55.371300  427001 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1005 20:22:55.371306  427001 command_runner.go:130] > # internal_wipe = true
	I1005 20:22:55.371316  427001 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1005 20:22:55.371326  427001 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1005 20:22:55.371336  427001 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1005 20:22:55.371345  427001 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1005 20:22:55.371356  427001 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1005 20:22:55.371363  427001 command_runner.go:130] > [crio.api]
	I1005 20:22:55.371374  427001 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1005 20:22:55.371388  427001 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1005 20:22:55.371395  427001 command_runner.go:130] > # IP address on which the stream server will listen.
	I1005 20:22:55.371403  427001 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1005 20:22:55.371415  427001 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1005 20:22:55.371430  427001 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1005 20:22:55.371440  427001 command_runner.go:130] > # stream_port = "0"
	I1005 20:22:55.371449  427001 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1005 20:22:55.371457  427001 command_runner.go:130] > # stream_enable_tls = false
	I1005 20:22:55.371471  427001 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1005 20:22:55.371486  427001 command_runner.go:130] > # stream_idle_timeout = ""
	I1005 20:22:55.371500  427001 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1005 20:22:55.371511  427001 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1005 20:22:55.371518  427001 command_runner.go:130] > # minutes.
	I1005 20:22:55.371524  427001 command_runner.go:130] > # stream_tls_cert = ""
	I1005 20:22:55.371535  427001 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1005 20:22:55.371550  427001 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1005 20:22:55.371561  427001 command_runner.go:130] > # stream_tls_key = ""
	I1005 20:22:55.371572  427001 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1005 20:22:55.371586  427001 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1005 20:22:55.371596  427001 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1005 20:22:55.371606  427001 command_runner.go:130] > # stream_tls_ca = ""
	I1005 20:22:55.371620  427001 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1005 20:22:55.371672  427001 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1005 20:22:55.371689  427001 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1005 20:22:55.371702  427001 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1005 20:22:55.371722  427001 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1005 20:22:55.371735  427001 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1005 20:22:55.371742  427001 command_runner.go:130] > [crio.runtime]
	I1005 20:22:55.371754  427001 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1005 20:22:55.371766  427001 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1005 20:22:55.371776  427001 command_runner.go:130] > # "nofile=1024:2048"
	I1005 20:22:55.371787  427001 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1005 20:22:55.371798  427001 command_runner.go:130] > # default_ulimits = [
	I1005 20:22:55.371803  427001 command_runner.go:130] > # ]
	I1005 20:22:55.371818  427001 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1005 20:22:55.371826  427001 command_runner.go:130] > # no_pivot = false
	I1005 20:22:55.371839  427001 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1005 20:22:55.371850  427001 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1005 20:22:55.371864  427001 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1005 20:22:55.371875  427001 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1005 20:22:55.371887  427001 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1005 20:22:55.371899  427001 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1005 20:22:55.371912  427001 command_runner.go:130] > # conmon = ""
	I1005 20:22:55.371921  427001 command_runner.go:130] > # Cgroup setting for conmon
	I1005 20:22:55.371932  427001 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1005 20:22:55.371944  427001 command_runner.go:130] > conmon_cgroup = "pod"
	I1005 20:22:55.371954  427001 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1005 20:22:55.371967  427001 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1005 20:22:55.371983  427001 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1005 20:22:55.371992  427001 command_runner.go:130] > # conmon_env = [
	I1005 20:22:55.371998  427001 command_runner.go:130] > # ]
	I1005 20:22:55.372007  427001 command_runner.go:130] > # Additional environment variables to set for all the
	I1005 20:22:55.372019  427001 command_runner.go:130] > # containers. These are overridden if set in the
	I1005 20:22:55.372033  427001 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1005 20:22:55.372043  427001 command_runner.go:130] > # default_env = [
	I1005 20:22:55.372053  427001 command_runner.go:130] > # ]
	I1005 20:22:55.372062  427001 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1005 20:22:55.372071  427001 command_runner.go:130] > # selinux = false
	I1005 20:22:55.372079  427001 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1005 20:22:55.372087  427001 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1005 20:22:55.372100  427001 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1005 20:22:55.372112  427001 command_runner.go:130] > # seccomp_profile = ""
	I1005 20:22:55.372121  427001 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1005 20:22:55.372134  427001 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1005 20:22:55.372143  427001 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1005 20:22:55.372150  427001 command_runner.go:130] > # which might increase security.
	I1005 20:22:55.372159  427001 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I1005 20:22:55.372174  427001 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1005 20:22:55.372187  427001 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1005 20:22:55.372201  427001 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1005 20:22:55.372216  427001 command_runner.go:130] > # the profile is set to "unconfined", then this is equivalent to disabling AppArmor.
	I1005 20:22:55.372230  427001 command_runner.go:130] > # This option supports live configuration reload.
	I1005 20:22:55.372241  427001 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1005 20:22:55.372254  427001 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1005 20:22:55.372264  427001 command_runner.go:130] > # the cgroup blockio controller.
	I1005 20:22:55.372273  427001 command_runner.go:130] > # blockio_config_file = ""
	I1005 20:22:55.372287  427001 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1005 20:22:55.372297  427001 command_runner.go:130] > # irqbalance daemon.
	I1005 20:22:55.372310  427001 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1005 20:22:55.372324  427001 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1005 20:22:55.372336  427001 command_runner.go:130] > # This option supports live configuration reload.
	I1005 20:22:55.372346  427001 command_runner.go:130] > # rdt_config_file = ""
	I1005 20:22:55.372358  427001 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1005 20:22:55.372365  427001 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1005 20:22:55.372375  427001 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1005 20:22:55.372385  427001 command_runner.go:130] > # separate_pull_cgroup = ""
	I1005 20:22:55.372400  427001 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1005 20:22:55.372415  427001 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1005 20:22:55.372425  427001 command_runner.go:130] > # will be added.
	I1005 20:22:55.372436  427001 command_runner.go:130] > # default_capabilities = [
	I1005 20:22:55.372445  427001 command_runner.go:130] > # 	"CHOWN",
	I1005 20:22:55.372450  427001 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1005 20:22:55.372459  427001 command_runner.go:130] > # 	"FSETID",
	I1005 20:22:55.372468  427001 command_runner.go:130] > # 	"FOWNER",
	I1005 20:22:55.372475  427001 command_runner.go:130] > # 	"SETGID",
	I1005 20:22:55.372482  427001 command_runner.go:130] > # 	"SETUID",
	I1005 20:22:55.372493  427001 command_runner.go:130] > # 	"SETPCAP",
	I1005 20:22:55.372500  427001 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1005 20:22:55.372510  427001 command_runner.go:130] > # 	"KILL",
	I1005 20:22:55.372519  427001 command_runner.go:130] > # ]
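	To narrow the default set, the same list can be uncommented with fewer entries; a hypothetical minimal profile (illustrative, not this cluster's config) might be:

	    [crio.runtime]
	    # grant only what a typical unprivileged web workload needs
	    default_capabilities = [
	        "CHOWN",
	        "NET_BIND_SERVICE",
	    ]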
	I1005 20:22:55.372534  427001 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1005 20:22:55.372545  427001 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1005 20:22:55.372554  427001 command_runner.go:130] > # add_inheritable_capabilities = true
	I1005 20:22:55.372568  427001 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1005 20:22:55.372582  427001 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1005 20:22:55.372592  427001 command_runner.go:130] > # default_sysctls = [
	I1005 20:22:55.372602  427001 command_runner.go:130] > # ]
	I1005 20:22:55.372615  427001 command_runner.go:130] > # List of devices on the host that a
	I1005 20:22:55.372667  427001 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1005 20:22:55.372684  427001 command_runner.go:130] > # allowed_devices = [
	I1005 20:22:55.372691  427001 command_runner.go:130] > # 	"/dev/fuse",
	I1005 20:22:55.372698  427001 command_runner.go:130] > # ]
	I1005 20:22:55.372707  427001 command_runner.go:130] > # List of additional devices, specified as
	I1005 20:22:55.372735  427001 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1005 20:22:55.372747  427001 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1005 20:22:55.372760  427001 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1005 20:22:55.372768  427001 command_runner.go:130] > # additional_devices = [
	I1005 20:22:55.372777  427001 command_runner.go:130] > # ]
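	Following the "<device-on-host>:<device-on-container>:<permissions>" format above, a sketch that exposes /dev/fuse read-write in every container would be:

	    [crio.runtime]
	    additional_devices = [
	        "/dev/fuse:/dev/fuse:rwm",
	    ]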
	I1005 20:22:55.372789  427001 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1005 20:22:55.372800  427001 command_runner.go:130] > # cdi_spec_dirs = [
	I1005 20:22:55.372807  427001 command_runner.go:130] > # 	"/etc/cdi",
	I1005 20:22:55.372817  427001 command_runner.go:130] > # 	"/var/run/cdi",
	I1005 20:22:55.372827  427001 command_runner.go:130] > # ]
	I1005 20:22:55.372841  427001 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1005 20:22:55.372854  427001 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1005 20:22:55.372864  427001 command_runner.go:130] > # Defaults to false.
	I1005 20:22:55.372873  427001 command_runner.go:130] > # device_ownership_from_security_context = false
	I1005 20:22:55.372887  427001 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1005 20:22:55.372909  427001 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1005 20:22:55.372919  427001 command_runner.go:130] > # hooks_dir = [
	I1005 20:22:55.372931  427001 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1005 20:22:55.372940  427001 command_runner.go:130] > # ]
	I1005 20:22:55.372953  427001 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1005 20:22:55.372966  427001 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1005 20:22:55.372975  427001 command_runner.go:130] > # its default mounts from the following two files:
	I1005 20:22:55.372981  427001 command_runner.go:130] > #
	I1005 20:22:55.372995  427001 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1005 20:22:55.373009  427001 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1005 20:22:55.373022  427001 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1005 20:22:55.373031  427001 command_runner.go:130] > #
	I1005 20:22:55.373044  427001 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1005 20:22:55.373055  427001 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1005 20:22:55.373067  427001 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1005 20:22:55.373082  427001 command_runner.go:130] > #      only add mounts it finds in this file.
	I1005 20:22:55.373091  427001 command_runner.go:130] > #
	I1005 20:22:55.373102  427001 command_runner.go:130] > # default_mounts_file = ""
	I1005 20:22:55.373114  427001 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1005 20:22:55.373130  427001 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1005 20:22:55.373140  427001 command_runner.go:130] > # pids_limit = 0
	I1005 20:22:55.373150  427001 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1005 20:22:55.373162  427001 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1005 20:22:55.373176  427001 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1005 20:22:55.373193  427001 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1005 20:22:55.373203  427001 command_runner.go:130] > # log_size_max = -1
	I1005 20:22:55.373218  427001 command_runner.go:130] > # Whether container output should be logged to journald in addition to the Kubernetes log file
	I1005 20:22:55.373228  427001 command_runner.go:130] > # log_to_journald = false
	I1005 20:22:55.373238  427001 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1005 20:22:55.373249  427001 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1005 20:22:55.373262  427001 command_runner.go:130] > # Path to directory for container attach sockets.
	I1005 20:22:55.373275  427001 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1005 20:22:55.373287  427001 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1005 20:22:55.373297  427001 command_runner.go:130] > # bind_mount_prefix = ""
	I1005 20:22:55.373310  427001 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1005 20:22:55.373319  427001 command_runner.go:130] > # read_only = false
	I1005 20:22:55.373328  427001 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1005 20:22:55.373341  427001 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1005 20:22:55.373353  427001 command_runner.go:130] > # live configuration reload.
	I1005 20:22:55.373365  427001 command_runner.go:130] > # log_level = "info"
	I1005 20:22:55.373377  427001 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1005 20:22:55.373389  427001 command_runner.go:130] > # This option supports live configuration reload.
	I1005 20:22:55.373399  427001 command_runner.go:130] > # log_filter = ""
	I1005 20:22:55.373413  427001 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1005 20:22:55.373423  427001 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1005 20:22:55.373432  427001 command_runner.go:130] > # separated by comma.
	I1005 20:22:55.373443  427001 command_runner.go:130] > # uid_mappings = ""
	I1005 20:22:55.373456  427001 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1005 20:22:55.373471  427001 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1005 20:22:55.373481  427001 command_runner.go:130] > # separated by comma.
	I1005 20:22:55.373491  427001 command_runner.go:130] > # gid_mappings = ""
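	For example, mapping container root onto an unprivileged host ID range uses the containerID:HostID:Size form described above (the ranges are hypothetical, not from this run):

	    [crio.runtime]
	    # map container UIDs/GIDs 0..65535 onto host IDs 100000..165535
	    uid_mappings = "0:100000:65536"
	    gid_mappings = "0:100000:65536"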
	I1005 20:22:55.373507  427001 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1005 20:22:55.373519  427001 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1005 20:22:55.373546  427001 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1005 20:22:55.373558  427001 command_runner.go:130] > # minimum_mappable_uid = -1
	I1005 20:22:55.373569  427001 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1005 20:22:55.373582  427001 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1005 20:22:55.373596  427001 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1005 20:22:55.373606  427001 command_runner.go:130] > # minimum_mappable_gid = -1
	I1005 20:22:55.373619  427001 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1005 20:22:55.373629  427001 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1005 20:22:55.373671  427001 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1005 20:22:55.373682  427001 command_runner.go:130] > # ctr_stop_timeout = 30
	I1005 20:22:55.373693  427001 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1005 20:22:55.373706  427001 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1005 20:22:55.373718  427001 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1005 20:22:55.373729  427001 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1005 20:22:55.373740  427001 command_runner.go:130] > # drop_infra_ctr = true
	I1005 20:22:55.373751  427001 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1005 20:22:55.373761  427001 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1005 20:22:55.373779  427001 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1005 20:22:55.373790  427001 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1005 20:22:55.373801  427001 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1005 20:22:55.373813  427001 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1005 20:22:55.373824  427001 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1005 20:22:55.373840  427001 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1005 20:22:55.373851  427001 command_runner.go:130] > # pinns_path = ""
	I1005 20:22:55.373861  427001 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1005 20:22:55.373874  427001 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1005 20:22:55.373888  427001 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1005 20:22:55.373900  427001 command_runner.go:130] > # default_runtime = "runc"
	I1005 20:22:55.373914  427001 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1005 20:22:55.373930  427001 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating them as directories).
	I1005 20:22:55.373948  427001 command_runner.go:130] > # This option protects against source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1005 20:22:55.373956  427001 command_runner.go:130] > # creation as a file is not desired either.
	I1005 20:22:55.373973  427001 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1005 20:22:55.373985  427001 command_runner.go:130] > # the hostname is being managed dynamically.
	I1005 20:22:55.373998  427001 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1005 20:22:55.374008  427001 command_runner.go:130] > # ]
	I1005 20:22:55.374022  427001 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1005 20:22:55.374036  427001 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1005 20:22:55.374050  427001 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1005 20:22:55.374059  427001 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1005 20:22:55.374067  427001 command_runner.go:130] > #
	I1005 20:22:55.374079  427001 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1005 20:22:55.374091  427001 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1005 20:22:55.374102  427001 command_runner.go:130] > #  runtime_type = "oci"
	I1005 20:22:55.374114  427001 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1005 20:22:55.374125  427001 command_runner.go:130] > #  privileged_without_host_devices = false
	I1005 20:22:55.374136  427001 command_runner.go:130] > #  allowed_annotations = []
	I1005 20:22:55.374143  427001 command_runner.go:130] > # Where:
	I1005 20:22:55.374150  427001 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1005 20:22:55.374163  427001 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1005 20:22:55.374177  427001 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1005 20:22:55.374191  427001 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1005 20:22:55.374201  427001 command_runner.go:130] > #   in $PATH.
	I1005 20:22:55.374215  427001 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1005 20:22:55.374227  427001 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1005 20:22:55.374240  427001 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1005 20:22:55.374247  427001 command_runner.go:130] > #   state.
	I1005 20:22:55.374255  427001 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1005 20:22:55.374268  427001 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1005 20:22:55.374282  427001 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1005 20:22:55.374295  427001 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1005 20:22:55.374309  427001 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1005 20:22:55.374323  427001 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1005 20:22:55.374334  427001 command_runner.go:130] > #   The currently recognized values are:
	I1005 20:22:55.374348  427001 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1005 20:22:55.374364  427001 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1005 20:22:55.374377  427001 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1005 20:22:55.374391  427001 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1005 20:22:55.374406  427001 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1005 20:22:55.374418  427001 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1005 20:22:55.374433  427001 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1005 20:22:55.374448  427001 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1005 20:22:55.374460  427001 command_runner.go:130] > #   should be moved to the container's cgroup
	I1005 20:22:55.374471  427001 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1005 20:22:55.374483  427001 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I1005 20:22:55.374493  427001 command_runner.go:130] > runtime_type = "oci"
	I1005 20:22:55.374508  427001 command_runner.go:130] > runtime_root = "/run/runc"
	I1005 20:22:55.374516  427001 command_runner.go:130] > runtime_config_path = ""
	I1005 20:22:55.374521  427001 command_runner.go:130] > monitor_path = ""
	I1005 20:22:55.374532  427001 command_runner.go:130] > monitor_cgroup = ""
	I1005 20:22:55.374544  427001 command_runner.go:130] > monitor_exec_cgroup = ""
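	A second handler following the documented table format would look like the sketch below; the crun paths are hypothetical examples, not part of this cluster's configuration:

	    [crio.runtime.runtimes.crun]
	    runtime_path = "/usr/bin/crun"  # hypothetical location
	    runtime_type = "oci"
	    runtime_root = "/run/crun"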
	I1005 20:22:55.374576  427001 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1005 20:22:55.374587  427001 command_runner.go:130] > # running containers
	I1005 20:22:55.374597  427001 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1005 20:22:55.374609  427001 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1005 20:22:55.374619  427001 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1005 20:22:55.374631  427001 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I1005 20:22:55.374643  427001 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1005 20:22:55.374655  427001 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1005 20:22:55.374666  427001 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1005 20:22:55.374677  427001 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1005 20:22:55.374688  427001 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1005 20:22:55.374699  427001 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1005 20:22:55.374709  427001 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1005 20:22:55.374720  427001 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1005 20:22:55.374735  427001 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1005 20:22:55.374751  427001 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1005 20:22:55.374768  427001 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1005 20:22:55.374780  427001 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1005 20:22:55.374794  427001 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1005 20:22:55.374808  427001 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1005 20:22:55.374821  427001 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1005 20:22:55.374837  427001 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1005 20:22:55.374847  427001 command_runner.go:130] > # Example:
	I1005 20:22:55.374858  427001 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1005 20:22:55.374870  427001 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1005 20:22:55.374883  427001 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1005 20:22:55.374892  427001 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1005 20:22:55.374901  427001 command_runner.go:130] > # cpuset = "0-1"
	I1005 20:22:55.374915  427001 command_runner.go:130] > # cpushares = "0"
	I1005 20:22:55.374925  427001 command_runner.go:130] > # Where:
	I1005 20:22:55.374938  427001 command_runner.go:130] > # The workload name is workload-type.
	I1005 20:22:55.374953  427001 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1005 20:22:55.374966  427001 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1005 20:22:55.374976  427001 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1005 20:22:55.374986  427001 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1005 20:22:55.374999  427001 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1005 20:22:55.375009  427001 command_runner.go:130] > # 
	I1005 20:22:55.375021  427001 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1005 20:22:55.375031  427001 command_runner.go:130] > #
	I1005 20:22:55.375044  427001 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1005 20:22:55.375058  427001 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1005 20:22:55.375087  427001 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1005 20:22:55.375100  427001 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1005 20:22:55.375114  427001 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1005 20:22:55.375124  427001 command_runner.go:130] > [crio.image]
	I1005 20:22:55.375139  427001 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1005 20:22:55.375149  427001 command_runner.go:130] > # default_transport = "docker://"
	I1005 20:22:55.375159  427001 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1005 20:22:55.375173  427001 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1005 20:22:55.375184  427001 command_runner.go:130] > # global_auth_file = ""
	I1005 20:22:55.375197  427001 command_runner.go:130] > # The image used to instantiate infra containers.
	I1005 20:22:55.375210  427001 command_runner.go:130] > # This option supports live configuration reload.
	I1005 20:22:55.375221  427001 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1005 20:22:55.375236  427001 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1005 20:22:55.375248  427001 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1005 20:22:55.375257  427001 command_runner.go:130] > # This option supports live configuration reload.
	I1005 20:22:55.375263  427001 command_runner.go:130] > # pause_image_auth_file = ""
	I1005 20:22:55.375277  427001 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1005 20:22:55.375291  427001 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1005 20:22:55.375305  427001 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1005 20:22:55.375318  427001 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1005 20:22:55.375330  427001 command_runner.go:130] > # pause_command = "/pause"
	I1005 20:22:55.375340  427001 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1005 20:22:55.375352  427001 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1005 20:22:55.375367  427001 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1005 20:22:55.375392  427001 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1005 20:22:55.375404  427001 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1005 20:22:55.375415  427001 command_runner.go:130] > # signature_policy = ""
	I1005 20:22:55.375425  427001 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1005 20:22:55.375436  427001 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1005 20:22:55.375447  427001 command_runner.go:130] > # changing them here.
	I1005 20:22:55.375458  427001 command_runner.go:130] > # insecure_registries = [
	I1005 20:22:55.375464  427001 command_runner.go:130] > # ]
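	If a registry really must skip TLS verification, the list is populated like this sketch (the address is a hypothetical local registry, not one used in this run):

	    [crio.image]
	    insecure_registries = [
	        "localhost:5000",
	    ]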
	I1005 20:22:55.375479  427001 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1005 20:22:55.375491  427001 command_runner.go:130] > # ignore; the last of these ignores volumes entirely.
	I1005 20:22:55.375502  427001 command_runner.go:130] > # image_volumes = "mkdir"
	I1005 20:22:55.375513  427001 command_runner.go:130] > # Temporary directory to use for storing big files
	I1005 20:22:55.375524  427001 command_runner.go:130] > # big_files_temporary_dir = ""
	I1005 20:22:55.375532  427001 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1005 20:22:55.375541  427001 command_runner.go:130] > # CNI plugins.
	I1005 20:22:55.375551  427001 command_runner.go:130] > [crio.network]
	I1005 20:22:55.375566  427001 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1005 20:22:55.375579  427001 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1005 20:22:55.375590  427001 command_runner.go:130] > # cni_default_network = ""
	I1005 20:22:55.375604  427001 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1005 20:22:55.375615  427001 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1005 20:22:55.375627  427001 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1005 20:22:55.375633  427001 command_runner.go:130] > # plugin_dirs = [
	I1005 20:22:55.375638  427001 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1005 20:22:55.375648  427001 command_runner.go:130] > # ]
	I1005 20:22:55.375661  427001 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1005 20:22:55.375672  427001 command_runner.go:130] > [crio.metrics]
	I1005 20:22:55.375683  427001 command_runner.go:130] > # Globally enable or disable metrics support.
	I1005 20:22:55.375694  427001 command_runner.go:130] > # enable_metrics = false
	I1005 20:22:55.375705  427001 command_runner.go:130] > # Specify enabled metrics collectors.
	I1005 20:22:55.375716  427001 command_runner.go:130] > # By default, all metrics are enabled.
	I1005 20:22:55.375728  427001 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1005 20:22:55.375742  427001 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1005 20:22:55.375756  427001 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1005 20:22:55.375767  427001 command_runner.go:130] > # metrics_collectors = [
	I1005 20:22:55.375778  427001 command_runner.go:130] > # 	"operations",
	I1005 20:22:55.375787  427001 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1005 20:22:55.375798  427001 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1005 20:22:55.375807  427001 command_runner.go:130] > # 	"operations_errors",
	I1005 20:22:55.375815  427001 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1005 20:22:55.375819  427001 command_runner.go:130] > # 	"image_pulls_by_name",
	I1005 20:22:55.375826  427001 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1005 20:22:55.375832  427001 command_runner.go:130] > # 	"image_pulls_failures",
	I1005 20:22:55.375838  427001 command_runner.go:130] > # 	"image_pulls_successes",
	I1005 20:22:55.375843  427001 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1005 20:22:55.375850  427001 command_runner.go:130] > # 	"image_layer_reuse",
	I1005 20:22:55.375854  427001 command_runner.go:130] > # 	"containers_oom_total",
	I1005 20:22:55.375861  427001 command_runner.go:130] > # 	"containers_oom",
	I1005 20:22:55.375865  427001 command_runner.go:130] > # 	"processes_defunct",
	I1005 20:22:55.375872  427001 command_runner.go:130] > # 	"operations_total",
	I1005 20:22:55.375877  427001 command_runner.go:130] > # 	"operations_latency_seconds",
	I1005 20:22:55.375884  427001 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1005 20:22:55.375889  427001 command_runner.go:130] > # 	"operations_errors_total",
	I1005 20:22:55.375895  427001 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1005 20:22:55.375900  427001 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1005 20:22:55.375910  427001 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1005 20:22:55.375915  427001 command_runner.go:130] > # 	"image_pulls_success_total",
	I1005 20:22:55.375922  427001 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1005 20:22:55.375926  427001 command_runner.go:130] > # 	"containers_oom_count_total",
	I1005 20:22:55.375932  427001 command_runner.go:130] > # ]
	I1005 20:22:55.375937  427001 command_runner.go:130] > # The port on which the metrics server will listen.
	I1005 20:22:55.375945  427001 command_runner.go:130] > # metrics_port = 9090
	I1005 20:22:55.375950  427001 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1005 20:22:55.375957  427001 command_runner.go:130] > # metrics_socket = ""
	I1005 20:22:55.375963  427001 command_runner.go:130] > # The certificate for the secure metrics server.
	I1005 20:22:55.375971  427001 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1005 20:22:55.375979  427001 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1005 20:22:55.375984  427001 command_runner.go:130] > # certificate on any modification event.
	I1005 20:22:55.375991  427001 command_runner.go:130] > # metrics_cert = ""
	I1005 20:22:55.375996  427001 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1005 20:22:55.376004  427001 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1005 20:22:55.376011  427001 command_runner.go:130] > # metrics_key = ""
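	Turning the metrics server on with a trimmed collector set is a small override; in this sketch the collector names come from the list above, and the port is the documented default:

	    [crio.metrics]
	    enable_metrics = true
	    metrics_port = 9090
	    metrics_collectors = [
	        "operations",
	        "image_pulls_failures",
	    ]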
	I1005 20:22:55.376016  427001 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1005 20:22:55.376023  427001 command_runner.go:130] > [crio.tracing]
	I1005 20:22:55.376029  427001 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1005 20:22:55.376035  427001 command_runner.go:130] > # enable_tracing = false
	I1005 20:22:55.376040  427001 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1005 20:22:55.376047  427001 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1005 20:22:55.376053  427001 command_runner.go:130] > # Number of samples to collect per million spans.
	I1005 20:22:55.376059  427001 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
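	As a sketch, exporting every span to a locally running trace collector (the endpoint and sampling rate below are illustrative assumptions) would be:

	    [crio.tracing]
	    enable_tracing = true
	    tracing_endpoint = "0.0.0.0:4317"
	    # 1000000 per million = sample all spans
	    tracing_sampling_rate_per_million = 1000000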
	I1005 20:22:55.376066  427001 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1005 20:22:55.376072  427001 command_runner.go:130] > [crio.stats]
	I1005 20:22:55.376078  427001 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1005 20:22:55.376086  427001 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1005 20:22:55.376092  427001 command_runner.go:130] > # stats_collection_period = 0
	I1005 20:22:55.376135  427001 command_runner.go:130] ! time="2023-10-05 20:22:55.367997196Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I1005 20:22:55.376149  427001 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1005 20:22:55.376212  427001 cni.go:84] Creating CNI manager for ""
	I1005 20:22:55.376222  427001 cni.go:136] 2 nodes found, recommending kindnet
	I1005 20:22:55.376236  427001 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1005 20:22:55.376268  427001 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-401792 NodeName:multinode-401792-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1005 20:22:55.376404  427001 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-401792-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1005 20:22:55.376485  427001 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-401792-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:multinode-401792 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1005 20:22:55.376552  427001 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1005 20:22:55.386088  427001 command_runner.go:130] > kubeadm
	I1005 20:22:55.386117  427001 command_runner.go:130] > kubectl
	I1005 20:22:55.386124  427001 command_runner.go:130] > kubelet
	I1005 20:22:55.386154  427001 binaries.go:44] Found k8s binaries, skipping transfer
	I1005 20:22:55.386222  427001 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1005 20:22:55.395057  427001 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1005 20:22:55.413206  427001 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1005 20:22:55.431655  427001 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1005 20:22:55.435217  427001 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1005 20:22:55.447555  427001 host.go:66] Checking if "multinode-401792" exists ...
	I1005 20:22:55.447833  427001 config.go:182] Loaded profile config "multinode-401792": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1005 20:22:55.447813  427001 start.go:304] JoinCluster: &{Name:multinode-401792 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-401792 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1005 20:22:55.447898  427001 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1005 20:22:55.447955  427001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-401792
	I1005 20:22:55.465625  427001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/multinode-401792/id_rsa Username:docker}
	I1005 20:22:55.617698  427001 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token nczhmo.h2taf3hp37dazenn --discovery-token-ca-cert-hash sha256:af54c40b34df9aa62a3cf1403ac0941464ca2ce3fa61291d1928dbb7869129bb 
	I1005 20:22:55.617777  427001 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1005 20:22:55.617827  427001 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token nczhmo.h2taf3hp37dazenn --discovery-token-ca-cert-hash sha256:af54c40b34df9aa62a3cf1403ac0941464ca2ce3fa61291d1928dbb7869129bb --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-401792-m02"
	I1005 20:22:55.653819  427001 command_runner.go:130] > [preflight] Running pre-flight checks
	I1005 20:22:55.682802  427001 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1005 20:22:55.682875  427001 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1044-gcp
	I1005 20:22:55.682889  427001 command_runner.go:130] > OS: Linux
	I1005 20:22:55.682898  427001 command_runner.go:130] > CGROUPS_CPU: enabled
	I1005 20:22:55.682911  427001 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1005 20:22:55.682923  427001 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1005 20:22:55.682945  427001 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1005 20:22:55.682955  427001 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1005 20:22:55.682967  427001 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1005 20:22:55.682985  427001 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1005 20:22:55.682998  427001 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1005 20:22:55.683013  427001 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1005 20:22:55.769319  427001 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1005 20:22:55.769352  427001 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1005 20:22:55.798934  427001 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1005 20:22:55.798962  427001 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1005 20:22:55.798969  427001 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1005 20:22:55.879235  427001 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1005 20:22:57.892068  427001 command_runner.go:130] > This node has joined the cluster:
	I1005 20:22:57.892103  427001 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1005 20:22:57.892112  427001 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1005 20:22:57.892129  427001 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1005 20:22:57.894909  427001 command_runner.go:130] ! W1005 20:22:55.653309    1107 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1005 20:22:57.894950  427001 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1044-gcp\n", err: exit status 1
	I1005 20:22:57.894970  427001 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1005 20:22:57.894998  427001 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token nczhmo.h2taf3hp37dazenn --discovery-token-ca-cert-hash sha256:af54c40b34df9aa62a3cf1403ac0941464ca2ce3fa61291d1928dbb7869129bb --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-401792-m02": (2.277151176s)
	I1005 20:22:57.895021  427001 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1005 20:22:58.054834  427001 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I1005 20:22:58.054868  427001 start.go:306] JoinCluster complete in 2.607054139s
	I1005 20:22:58.054883  427001 cni.go:84] Creating CNI manager for ""
	I1005 20:22:58.054891  427001 cni.go:136] 2 nodes found, recommending kindnet
	I1005 20:22:58.054937  427001 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1005 20:22:58.058597  427001 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1005 20:22:58.058632  427001 command_runner.go:130] >   Size: 3955775   	Blocks: 7728       IO Block: 4096   regular file
	I1005 20:22:58.058643  427001 command_runner.go:130] > Device: 35h/53d	Inode: 1303299     Links: 1
	I1005 20:22:58.058654  427001 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1005 20:22:58.058664  427001 command_runner.go:130] > Access: 2023-05-09 19:53:47.000000000 +0000
	I1005 20:22:58.058677  427001 command_runner.go:130] > Modify: 2023-05-09 19:53:47.000000000 +0000
	I1005 20:22:58.058687  427001 command_runner.go:130] > Change: 2023-10-05 20:03:13.391912165 +0000
	I1005 20:22:58.058700  427001 command_runner.go:130] >  Birth: 2023-10-05 20:03:13.367909862 +0000
	I1005 20:22:58.058760  427001 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
	I1005 20:22:58.058772  427001 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1005 20:22:58.076030  427001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1005 20:22:58.291355  427001 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1005 20:22:58.295337  427001 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1005 20:22:58.298395  427001 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1005 20:22:58.310651  427001 command_runner.go:130] > daemonset.apps/kindnet configured
	I1005 20:22:58.315403  427001 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17363-334135/kubeconfig
	I1005 20:22:58.315680  427001 kapi.go:59] client config for multinode-401792: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17363-334135/.minikube/profiles/multinode-401792/client.crt", KeyFile:"/home/jenkins/minikube-integration/17363-334135/.minikube/profiles/multinode-401792/client.key", CAFile:"/home/jenkins/minikube-integration/17363-334135/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bfbf60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
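
The rest.Config dump above shows the client configuration loaded from the test kubeconfig: the API server host plus the profile's client cert/key and the cluster CA. A minimal client-go sketch that builds the equivalent config from the same kubeconfig path (the path is taken from the log; the rest is illustrative, not minikube's kapi.go):

package main

import (
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("",
		"/home/jenkins/minikube-integration/17363-334135/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	// cfg.Host and cfg.TLSClientConfig match the dumped rest.Config fields.
	log.Println("API server:", cfg.Host) // https://192.168.58.2:8443 in this run

	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	_ = clientset // used by the scale and readiness sketches below
}
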
	I1005 20:22:58.316049  427001 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1005 20:22:58.316063  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:58.316071  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:58.316077  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:58.318713  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:58.318735  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:58.318746  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:58.318754  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:58.318763  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:58.318775  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:58.318786  427001 round_trippers.go:580]     Content-Length: 291
	I1005 20:22:58.318794  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:58 GMT
	I1005 20:22:58.318802  427001 round_trippers.go:580]     Audit-Id: 9abd098a-c96c-4b19-9d57-db9ae35f7395
	I1005 20:22:58.318830  427001 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"4c04a5db-f258-4e51-aa96-1b09daef1dd4","resourceVersion":"447","creationTimestamp":"2023-10-05T20:21:55Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1005 20:22:58.318943  427001 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-401792" context rescaled to 1 replicas
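
The GET above reads the coredns deployment's scale subresource, and the "rescaled to 1 replicas" line records the follow-up update. A sketch of the same rescale with client-go's scale helpers, assuming the kubeconfig path from the log; this is an illustrative equivalent, not minikube's own code:

package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("",
		"/home/jenkins/minikube-integration/17363-334135/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	ctx := context.Background()
	deployments := clientset.AppsV1().Deployments("kube-system")

	// GET /apis/apps/v1/namespaces/kube-system/deployments/coredns/scale,
	// as in the round_trippers lines above.
	scale, err := deployments.GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	if scale.Spec.Replicas != 1 {
		scale.Spec.Replicas = 1
		if _, err := deployments.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
			log.Fatal(err)
		}
	}
	log.Println("coredns rescaled to 1 replica")
}

Updating the scale subresource avoids rewriting the whole Deployment spec, which is why only spec.replicas appears in the 291-byte Scale response body above.
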
	I1005 20:22:58.318977  427001 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1005 20:22:58.321615  427001 out.go:177] * Verifying Kubernetes components...
	I1005 20:22:58.322986  427001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1005 20:22:58.335053  427001 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17363-334135/kubeconfig
	I1005 20:22:58.335311  427001 kapi.go:59] client config for multinode-401792: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17363-334135/.minikube/profiles/multinode-401792/client.crt", KeyFile:"/home/jenkins/minikube-integration/17363-334135/.minikube/profiles/multinode-401792/client.key", CAFile:"/home/jenkins/minikube-integration/17363-334135/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bfbf60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1005 20:22:58.335567  427001 node_ready.go:35] waiting up to 6m0s for node "multinode-401792-m02" to be "Ready" ...
	I1005 20:22:58.335634  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:22:58.335642  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:58.335650  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:58.335658  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:58.337904  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:58.337926  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:58.337936  427001 round_trippers.go:580]     Audit-Id: 2f5d6809-4e33-4700-8e16-ca2b57ed3250
	I1005 20:22:58.337944  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:58.337953  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:58.337961  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:58.337969  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:58.337985  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:58 GMT
	I1005 20:22:58.338126  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"484","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I1005 20:22:58.338501  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:22:58.338514  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:58.338521  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:58.338527  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:58.340637  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:58.340657  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:58.340664  427001 round_trippers.go:580]     Audit-Id: 9d77664d-ebdf-414a-aeb8-aea8f9b78f54
	I1005 20:22:58.340670  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:58.340678  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:58.340687  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:58.340694  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:58.340704  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:58 GMT
	I1005 20:22:58.340802  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"484","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I1005 20:22:58.841887  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:22:58.841915  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:58.841923  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:58.841930  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:58.844369  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:58.844391  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:58.844400  427001 round_trippers.go:580]     Audit-Id: c52cdffb-ba7a-4b55-9078-8ba6e7a71b58
	I1005 20:22:58.844410  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:58.844418  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:58.844427  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:58.844435  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:58.844446  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:58 GMT
	I1005 20:22:58.844568  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"484","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I1005 20:22:59.341811  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:22:59.341833  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:59.341846  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:59.341852  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:59.344063  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:59.344089  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:59.344101  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:59.344110  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:59.344119  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:59.344125  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:59 GMT
	I1005 20:22:59.344131  427001 round_trippers.go:580]     Audit-Id: 5374e3e2-1b06-4af4-96a9-10ea31e3f535
	I1005 20:22:59.344136  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:59.344278  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"484","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I1005 20:22:59.841698  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:22:59.841720  427001 round_trippers.go:469] Request Headers:
	I1005 20:22:59.841728  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:22:59.841734  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:22:59.844207  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:22:59.844228  427001 round_trippers.go:577] Response Headers:
	I1005 20:22:59.844235  427001 round_trippers.go:580]     Audit-Id: 28c393b8-35d9-4862-a0d4-c4b3ccdbec43
	I1005 20:22:59.844241  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:22:59.844246  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:22:59.844251  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:22:59.844257  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:22:59.844265  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:22:59 GMT
	I1005 20:22:59.844386  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"484","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I1005 20:23:00.341702  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:23:00.341728  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:00.341736  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:00.341742  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:00.344206  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:00.344232  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:00.344243  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:00.344251  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:00.344259  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:00.344268  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:00 GMT
	I1005 20:23:00.344277  427001 round_trippers.go:580]     Audit-Id: 88a10328-ca43-42c3-b22b-192630b28a35
	I1005 20:23:00.344285  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:00.344385  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"484","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I1005 20:23:00.344670  427001 node_ready.go:58] node "multinode-401792-m02" has status "Ready":"False"
	I1005 20:23:00.842076  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:23:00.842101  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:00.842109  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:00.842115  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:00.844488  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:00.844508  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:00.844516  427001 round_trippers.go:580]     Audit-Id: f321d92e-b158-4340-b7c8-69b137d1365e
	I1005 20:23:00.844521  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:00.844526  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:00.844532  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:00.844537  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:00.844542  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:00 GMT
	I1005 20:23:00.844750  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"484","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I1005 20:23:01.341357  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:23:01.341383  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:01.341392  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:01.341398  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:01.343905  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:01.343933  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:01.343944  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:01 GMT
	I1005 20:23:01.343952  427001 round_trippers.go:580]     Audit-Id: 72e30a51-80b1-4b15-9668-5cc21d2f61b2
	I1005 20:23:01.343960  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:01.343971  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:01.343979  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:01.343990  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:01.344088  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"484","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I1005 20:23:01.841689  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:23:01.841720  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:01.841734  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:01.841744  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:01.844152  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:01.844174  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:01.844181  427001 round_trippers.go:580]     Audit-Id: 43a48c9a-4b04-4de5-8ce6-2b2af82e7295
	I1005 20:23:01.844188  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:01.844193  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:01.844198  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:01.844204  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:01.844212  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:01 GMT
	I1005 20:23:01.844390  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"484","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I1005 20:23:02.341424  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:23:02.341447  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:02.341456  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:02.341462  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:02.343942  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:02.343970  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:02.343982  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:02.343989  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:02 GMT
	I1005 20:23:02.343998  427001 round_trippers.go:580]     Audit-Id: 5afe5986-6fa3-44f0-ae90-d1fe5aa41956
	I1005 20:23:02.344010  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:02.344020  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:02.344027  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:02.344170  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"484","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I1005 20:23:02.842004  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:23:02.842031  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:02.842039  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:02.842045  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:02.844436  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:02.844464  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:02.844473  427001 round_trippers.go:580]     Audit-Id: a4fb5d47-b37c-4f1b-b700-fe10a1e08d06
	I1005 20:23:02.844479  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:02.844484  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:02.844489  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:02.844493  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:02.844508  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:02 GMT
	I1005 20:23:02.844622  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"503","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I1005 20:23:02.845037  427001 node_ready.go:58] node "multinode-401792-m02" has status "Ready":"False"
	I1005 20:23:03.341237  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:23:03.341260  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:03.341268  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:03.341275  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:03.343550  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:03.343569  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:03.343576  427001 round_trippers.go:580]     Audit-Id: cf5ca3e5-ab31-4021-9861-55087b15833c
	I1005 20:23:03.343581  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:03.343586  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:03.343591  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:03.343596  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:03.343601  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:03 GMT
	I1005 20:23:03.343733  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"503","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I1005 20:23:03.841326  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:23:03.841351  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:03.841383  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:03.841391  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:03.843779  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:03.843802  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:03.843809  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:03 GMT
	I1005 20:23:03.843815  427001 round_trippers.go:580]     Audit-Id: 706f9097-79a4-493e-bf30-82c6e241bfb5
	I1005 20:23:03.843820  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:03.843831  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:03.843842  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:03.843849  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:03.843955  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"503","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I1005 20:23:04.341547  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:23:04.341570  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:04.341580  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:04.341585  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:04.344064  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:04.344084  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:04.344094  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:04.344103  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:04.344111  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:04.344121  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:04.344130  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:04 GMT
	I1005 20:23:04.344138  427001 round_trippers.go:580]     Audit-Id: 61fe7fdb-b087-40a4-9784-54ac95f55635
	I1005 20:23:04.344351  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"503","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I1005 20:23:04.841915  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:23:04.841966  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:04.841975  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:04.841981  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:04.844264  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:04.844287  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:04.844294  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:04.844299  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:04.844304  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:04 GMT
	I1005 20:23:04.844309  427001 round_trippers.go:580]     Audit-Id: 3d1f903d-146b-4a5c-900e-27d2017200d4
	I1005 20:23:04.844319  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:04.844325  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:04.844445  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"503","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I1005 20:23:05.342179  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:23:05.342200  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:05.342209  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:05.342215  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:05.344625  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:05.344651  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:05.344662  427001 round_trippers.go:580]     Audit-Id: 9e6c9964-4da7-4441-bd91-58af491d3628
	I1005 20:23:05.344671  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:05.344680  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:05.344689  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:05.344695  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:05.344703  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:05 GMT
	I1005 20:23:05.344842  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"503","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I1005 20:23:05.345255  427001 node_ready.go:58] node "multinode-401792-m02" has status "Ready":"False"
	I1005 20:23:05.841432  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:23:05.841460  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:05.841471  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:05.841478  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:05.843998  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:05.844029  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:05.844040  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:05.844049  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:05 GMT
	I1005 20:23:05.844058  427001 round_trippers.go:580]     Audit-Id: 85811ff0-a3c6-4ef7-b6c0-571ab48620b3
	I1005 20:23:05.844065  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:05.844073  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:05.844080  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:05.844204  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"503","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I1005 20:23:06.341847  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:23:06.341869  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:06.341881  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:06.341895  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:06.344556  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:06.344580  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:06.344587  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:06.344592  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:06.344600  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:06.344608  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:06.344617  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:06 GMT
	I1005 20:23:06.344625  427001 round_trippers.go:580]     Audit-Id: 871d0ee6-e117-4406-8232-ea334f074741
	I1005 20:23:06.344820  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"503","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I1005 20:23:06.841414  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:23:06.841438  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:06.841447  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:06.841453  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:06.843932  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:06.843956  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:06.843965  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:06.843974  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:06.843982  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:06 GMT
	I1005 20:23:06.843990  427001 round_trippers.go:580]     Audit-Id: 27c617b6-575d-49c6-aa72-e0b588720acd
	I1005 20:23:06.843997  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:06.844006  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:06.844121  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"503","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I1005 20:23:07.341857  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:23:07.341889  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:07.341900  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:07.341908  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:07.344452  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:07.344482  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:07.344493  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:07.344502  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:07 GMT
	I1005 20:23:07.344509  427001 round_trippers.go:580]     Audit-Id: 1b4144ad-c3c5-436a-8539-a85721ed7d58
	I1005 20:23:07.344516  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:07.344523  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:07.344531  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:07.344697  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"503","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I1005 20:23:07.842307  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:23:07.842333  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:07.842342  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:07.842348  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:07.844915  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:07.844938  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:07.844946  427001 round_trippers.go:580]     Audit-Id: 40a9233b-e04e-4c41-bd48-eac70a5d8da9
	I1005 20:23:07.844952  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:07.844957  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:07.844962  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:07.844970  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:07.844978  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:07 GMT
	I1005 20:23:07.845114  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"503","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I1005 20:23:07.845430  427001 node_ready.go:58] node "multinode-401792-m02" has status "Ready":"False"
	I1005 20:23:08.342287  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:23:08.342308  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:08.342316  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:08.342324  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:08.344798  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:08.344824  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:08.344836  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:08.344845  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:08.344854  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:08.344861  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:08.344888  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:08 GMT
	I1005 20:23:08.344902  427001 round_trippers.go:580]     Audit-Id: 86bd7d05-b129-4ae8-bd37-82553a1074ab
	I1005 20:23:08.345040  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"509","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1005 20:23:08.841610  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:23:08.841641  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:08.841654  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:08.841664  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:08.844098  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:08.844125  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:08.844135  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:08.844143  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:08 GMT
	I1005 20:23:08.844152  427001 round_trippers.go:580]     Audit-Id: c57cc63c-fb2f-4d0a-b659-bc468c878567
	I1005 20:23:08.844161  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:08.844170  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:08.844181  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:08.844339  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"509","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1005 20:23:09.341858  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:23:09.341891  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:09.341903  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:09.341910  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:09.344558  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:09.344582  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:09.344593  427001 round_trippers.go:580]     Audit-Id: 7522dbe5-92a4-4a78-87c1-0050ad8dc411
	I1005 20:23:09.344603  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:09.344612  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:09.344623  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:09.344636  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:09.344645  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:09 GMT
	I1005 20:23:09.344825  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"509","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1005 20:23:09.841308  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:23:09.841334  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:09.841343  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:09.841349  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:09.843798  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:09.843820  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:09.843828  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:09.843833  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:09.843838  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:09.843843  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:09.843851  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:09 GMT
	I1005 20:23:09.843857  427001 round_trippers.go:580]     Audit-Id: 71d98c66-4bc3-463e-a2b4-7e1d3dcf6578
	I1005 20:23:09.844006  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"509","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1005 20:23:10.342286  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:23:10.342310  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:10.342319  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:10.342325  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:10.345018  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:10.345045  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:10.345055  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:10.345063  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:10 GMT
	I1005 20:23:10.345070  427001 round_trippers.go:580]     Audit-Id: 6d705671-bec1-4651-b40a-fa7b0aab12f5
	I1005 20:23:10.345077  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:10.345084  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:10.345106  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:10.345261  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"509","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1005 20:23:10.345626  427001 node_ready.go:58] node "multinode-401792-m02" has status "Ready":"False"
	I1005 20:23:10.841423  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:23:10.841446  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:10.841454  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:10.841468  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:10.843888  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:10.843915  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:10.843923  427001 round_trippers.go:580]     Audit-Id: 11e627ed-b8b4-439d-8def-93fe9bcfd6a1
	I1005 20:23:10.843929  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:10.843934  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:10.843942  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:10.843950  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:10.843961  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:10 GMT
	I1005 20:23:10.844091  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"509","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1005 20:23:11.341748  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:23:11.341781  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:11.341790  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:11.341796  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:11.344320  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:11.344358  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:11.344371  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:11.344388  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:11.344402  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:11 GMT
	I1005 20:23:11.344412  427001 round_trippers.go:580]     Audit-Id: ec7f652b-c2fa-4ec9-a001-a6d83c06e25a
	I1005 20:23:11.344427  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:11.344441  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:11.344619  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"509","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1005 20:23:11.842203  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:23:11.842229  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:11.842238  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:11.842244  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:11.844691  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:11.844726  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:11.844737  427001 round_trippers.go:580]     Audit-Id: 6aabda74-704b-41c7-9e7e-8bc5d23ad9aa
	I1005 20:23:11.844746  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:11.844755  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:11.844762  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:11.844769  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:11.844782  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:11 GMT
	I1005 20:23:11.844944  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"509","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1005 20:23:12.341860  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:23:12.341886  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:12.341898  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:12.341904  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:12.344452  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:12.344479  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:12.344490  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:12.344500  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:12 GMT
	I1005 20:23:12.344507  427001 round_trippers.go:580]     Audit-Id: b8d44da3-5f20-4b68-9eec-d67216cfbfa1
	I1005 20:23:12.344515  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:12.344523  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:12.344535  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:12.344697  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"509","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1005 20:23:12.841231  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:23:12.841253  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:12.841262  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:12.841268  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:12.843669  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:12.843694  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:12.843702  427001 round_trippers.go:580]     Audit-Id: 107ba0ab-8b1e-471b-8561-fde8e16f5b4b
	I1005 20:23:12.843710  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:12.843718  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:12.843729  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:12.843737  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:12.843746  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:12 GMT
	I1005 20:23:12.843894  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"509","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1005 20:23:12.844223  427001 node_ready.go:58] node "multinode-401792-m02" has status "Ready":"False"
	I1005 20:23:13.341481  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:23:13.341506  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:13.341515  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:13.341521  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:13.343884  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:13.343908  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:13.343916  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:13 GMT
	I1005 20:23:13.343921  427001 round_trippers.go:580]     Audit-Id: 5f455b2b-43cb-40d1-8176-ad8ae8a81a28
	I1005 20:23:13.343926  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:13.343931  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:13.343936  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:13.343941  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:13.344072  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"509","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1005 20:23:13.841344  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:23:13.841373  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:13.841382  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:13.841388  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:13.843714  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:13.843744  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:13.843754  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:13 GMT
	I1005 20:23:13.843762  427001 round_trippers.go:580]     Audit-Id: 1ea6bd84-f5a4-4d22-8ded-cad5a251c8f1
	I1005 20:23:13.843769  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:13.843777  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:13.843793  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:13.843801  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:13.843923  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"509","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1005 20:23:14.341528  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:23:14.341555  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:14.341564  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:14.341570  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:14.344212  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:14.344240  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:14.344248  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:14.344254  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:14 GMT
	I1005 20:23:14.344259  427001 round_trippers.go:580]     Audit-Id: e93c95aa-ddc1-446d-92fc-64c75ff640ba
	I1005 20:23:14.344264  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:14.344270  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:14.344275  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:14.344378  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"509","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1005 20:23:14.841968  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:23:14.841995  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:14.842004  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:14.842010  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:14.844539  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:14.844560  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:14.844567  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:14 GMT
	I1005 20:23:14.844573  427001 round_trippers.go:580]     Audit-Id: 93288cab-03da-462a-bcfb-8d7469de2e11
	I1005 20:23:14.844580  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:14.844585  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:14.844590  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:14.844596  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:14.844690  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"509","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1005 20:23:14.844987  427001 node_ready.go:58] node "multinode-401792-m02" has status "Ready":"False"
	I1005 20:23:15.341374  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:23:15.341403  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:15.341414  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:15.341422  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:15.343876  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:15.343909  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:15.343920  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:15.343929  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:15.343936  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:15.343942  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:15.343947  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:15 GMT
	I1005 20:23:15.343952  427001 round_trippers.go:580]     Audit-Id: e52040bb-cd0a-49cf-98e2-b4cdb4458792
	I1005 20:23:15.344077  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"509","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1005 20:23:15.841696  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:23:15.841723  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:15.841732  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:15.841737  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:15.844219  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:15.844241  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:15.844248  427001 round_trippers.go:580]     Audit-Id: 5fa3b852-c53a-48be-a589-bb419e7157e0
	I1005 20:23:15.844254  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:15.844259  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:15.844265  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:15.844270  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:15.844275  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:15 GMT
	I1005 20:23:15.844420  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"509","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1005 20:23:16.342084  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:23:16.342107  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:16.342116  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:16.342124  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:16.344614  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:16.344641  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:16.344652  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:16.344658  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:16.344664  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:16.344669  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:16.344675  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:16 GMT
	I1005 20:23:16.344680  427001 round_trippers.go:580]     Audit-Id: 9a800c02-fee0-4922-a3ca-b32c242cf2f7
	I1005 20:23:16.344853  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"509","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1005 20:23:16.841361  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:23:16.841388  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:16.841396  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:16.841403  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:16.843905  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:16.843928  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:16.843935  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:16.843941  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:16.843946  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:16.843951  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:16 GMT
	I1005 20:23:16.843960  427001 round_trippers.go:580]     Audit-Id: 0baf3a7d-511d-4328-a033-126ba6283be3
	I1005 20:23:16.843965  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:16.844064  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"509","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1005 20:23:17.341836  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:23:17.341861  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:17.341870  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:17.341876  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:17.344319  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:17.344339  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:17.344346  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:17.344352  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:17.344357  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:17.344362  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:17 GMT
	I1005 20:23:17.344368  427001 round_trippers.go:580]     Audit-Id: 440be5cf-5543-4c43-b4cc-ef03436d03fb
	I1005 20:23:17.344373  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:17.344528  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"509","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1005 20:23:17.344842  427001 node_ready.go:58] node "multinode-401792-m02" has status "Ready":"False"
	I1005 20:23:17.842332  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:23:17.842362  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:17.842376  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:17.842385  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:17.844971  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:17.845002  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:17.845012  427001 round_trippers.go:580]     Audit-Id: 0124c2e5-ab8d-4af2-b6af-9607e6b0b140
	I1005 20:23:17.845019  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:17.845024  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:17.845029  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:17.845034  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:17.845039  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:17 GMT
	I1005 20:23:17.845155  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"509","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1005 20:23:18.342042  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:23:18.342075  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:18.342084  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:18.342091  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:18.344516  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:18.344540  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:18.344548  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:18.344554  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:18 GMT
	I1005 20:23:18.344559  427001 round_trippers.go:580]     Audit-Id: 7ab7c401-ec86-477a-bf1f-17863bd95eaa
	I1005 20:23:18.344564  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:18.344569  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:18.344577  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:18.344709  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"509","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1005 20:23:18.841304  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:23:18.841334  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:18.841348  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:18.841356  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:18.843707  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:18.843728  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:18.843737  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:18.843742  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:18.843751  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:18.843757  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:18.843762  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:18 GMT
	I1005 20:23:18.843767  427001 round_trippers.go:580]     Audit-Id: 894d7ab4-5e19-459c-af47-f9965c079146
	I1005 20:23:18.843870  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"509","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1005 20:23:19.341358  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:23:19.341385  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:19.341393  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:19.341399  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:19.343946  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:19.343973  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:19.343980  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:19.343986  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:19 GMT
	I1005 20:23:19.343993  427001 round_trippers.go:580]     Audit-Id: e10ca46c-1957-40db-8c09-6786a2664311
	I1005 20:23:19.344001  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:19.344011  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:19.344020  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:19.344180  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"509","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1005 20:23:19.841734  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:23:19.841759  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:19.841768  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:19.841774  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:19.844275  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:19.844300  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:19.844308  427001 round_trippers.go:580]     Audit-Id: 682801bc-affb-4132-ac7d-8aad39d3fa66
	I1005 20:23:19.844315  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:19.844323  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:19.844331  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:19.844340  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:19.844352  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:19 GMT
	I1005 20:23:19.844457  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"509","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1005 20:23:19.844778  427001 node_ready.go:58] node "multinode-401792-m02" has status "Ready":"False"
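	The loop recorded above is minikube's node-readiness wait: roughly every 500ms it issues GET /api/v1/nodes/multinode-401792-m02 and re-checks the node's Ready condition; every response here still reports resourceVersion "509" with no Ready=True condition, so the poll continues. As a rough, hedged illustration only (assumed names throughout; this is not the actual node_ready.go source), such a poll could be sketched with client-go as follows:

	// Illustrative sketch only; waitNodeReady and its wiring are assumptions,
	// not minikube's actual node_ready.go implementation.
	package readiness

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitNodeReady polls GET /api/v1/nodes/<name> until the node reports the
	// Ready condition as True, or the timeout elapses. The ~500ms sleep mirrors
	// the request spacing visible in the log above.
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, cond := range node.Status.Conditions {
					if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
						return nil
					}
				}
				// Corresponds to the recurring summary line in this log:
				// node "multinode-401792-m02" has status "Ready":"False"
				fmt.Printf("node %q has status \"Ready\":\"False\"\n", name)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("node %q did not become Ready within %v", name, timeout)
	}

	Construction of the kubernetes.Interface (for example via clientcmd) is omitted; the sketch is only meant to make the request cadence and the recurring "Ready":"False" summary lines in this log easier to follow.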
	I1005 20:23:20.342175  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:23:20.342199  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:20.342208  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:20.342214  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:20.344765  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:20.344792  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:20.344805  427001 round_trippers.go:580]     Audit-Id: c75c2773-4184-438c-a547-5949c0f12c69
	I1005 20:23:20.344813  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:20.344822  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:20.344828  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:20.344837  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:20.344846  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:20 GMT
	I1005 20:23:20.345031  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"509","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1005 20:23:20.841731  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:23:20.841760  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:20.841771  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:20.841780  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:20.844280  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:20.844314  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:20.844325  427001 round_trippers.go:580]     Audit-Id: a19d69bc-2bd8-4d34-951d-913b8453172e
	I1005 20:23:20.844334  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:20.844353  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:20.844362  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:20.844369  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:20.844374  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:20 GMT
	I1005 20:23:20.844512  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"509","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1005 20:23:21.342157  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:23:21.342193  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:21.342202  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:21.342208  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:21.344639  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:21.344668  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:21.344681  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:21 GMT
	I1005 20:23:21.344690  427001 round_trippers.go:580]     Audit-Id: 945562f2-3722-4b4d-b0f8-30a84402c5a1
	I1005 20:23:21.344697  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:21.344705  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:21.344717  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:21.344729  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:21.344873  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"509","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1005 20:23:21.841373  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:23:21.841401  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:21.841410  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:21.841417  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:21.843910  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:21.843937  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:21.843948  427001 round_trippers.go:580]     Audit-Id: 50496c7e-8a82-4a2d-a375-1aaba14136f9
	I1005 20:23:21.843956  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:21.843964  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:21.843971  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:21.843979  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:21.843986  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:21 GMT
	I1005 20:23:21.844166  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"509","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1005 20:23:22.341785  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:23:22.341815  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:22.341823  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:22.341830  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:22.344246  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:22.344270  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:22.344280  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:22.344288  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:22 GMT
	I1005 20:23:22.344296  427001 round_trippers.go:580]     Audit-Id: 5d185343-cb23-47ec-8f9d-20b06f6d2db6
	I1005 20:23:22.344304  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:22.344311  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:22.344322  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:22.344446  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"509","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1005 20:23:22.344746  427001 node_ready.go:58] node "multinode-401792-m02" has status "Ready":"False"
	I1005 20:23:22.842108  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:23:22.842138  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:22.842147  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:22.842153  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:22.844970  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:22.844995  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:22.845006  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:22 GMT
	I1005 20:23:22.845013  427001 round_trippers.go:580]     Audit-Id: 9c46fa3c-b509-46c7-957d-d5aa1876818f
	I1005 20:23:22.845019  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:22.845028  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:22.845034  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:22.845042  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:22.845210  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"509","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1005 20:23:23.341870  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:23:23.341897  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:23.341906  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:23.341912  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:23.344276  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:23.344298  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:23.344309  427001 round_trippers.go:580]     Audit-Id: 78e06cfb-5759-4f45-ae1f-924febcf86fb
	I1005 20:23:23.344318  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:23.344325  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:23.344332  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:23.344339  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:23.344348  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:23 GMT
	I1005 20:23:23.344498  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"509","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1005 20:23:23.842142  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:23:23.842180  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:23.842192  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:23.842201  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:23.844673  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:23.844700  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:23.844711  427001 round_trippers.go:580]     Audit-Id: 2a0bf362-7665-449e-a1e5-e2b3e660fe6f
	I1005 20:23:23.844720  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:23.844729  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:23.844738  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:23.844745  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:23.844754  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:23 GMT
	I1005 20:23:23.844872  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"509","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1005 20:23:24.341498  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:23:24.341524  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:24.341533  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:24.341540  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:24.343906  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:24.343938  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:24.343949  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:24.343959  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:24.343968  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:24 GMT
	I1005 20:23:24.343977  427001 round_trippers.go:580]     Audit-Id: d7c0e712-ac49-4bfb-a2c6-f6df7147a05c
	I1005 20:23:24.343986  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:24.343999  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:24.344147  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"509","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1005 20:23:24.841694  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:23:24.841719  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:24.841728  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:24.841734  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:24.844159  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:24.844181  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:24.844189  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:24.844194  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:24.844199  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:24.844204  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:24 GMT
	I1005 20:23:24.844210  427001 round_trippers.go:580]     Audit-Id: 5a256167-90c0-4b1c-92d7-e9adeaa762c7
	I1005 20:23:24.844215  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:24.844320  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"509","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1005 20:23:24.844649  427001 node_ready.go:58] node "multinode-401792-m02" has status "Ready":"False"
	I1005 20:23:25.342042  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:23:25.342065  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:25.342075  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:25.342085  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:25.344400  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:25.344450  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:25.344462  427001 round_trippers.go:580]     Audit-Id: 4f9c87aa-1ff5-4946-890b-ec697107860c
	I1005 20:23:25.344472  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:25.344479  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:25.344487  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:25.344497  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:25.344511  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:25 GMT
	I1005 20:23:25.344641  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"509","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1005 20:23:25.842291  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:23:25.842330  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:25.842348  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:25.842356  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:25.844694  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:25.844715  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:25.844722  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:25.844728  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:25.844734  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:25.844740  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:25 GMT
	I1005 20:23:25.844748  427001 round_trippers.go:580]     Audit-Id: 343a4d08-cf07-4f9f-b888-f8afe8a0b367
	I1005 20:23:25.844756  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:25.844903  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"509","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1005 20:23:26.342241  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:23:26.342269  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:26.342282  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:26.342292  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:26.344731  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:26.344762  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:26.344773  427001 round_trippers.go:580]     Audit-Id: 5d872f8c-7841-4bdc-8988-eb61e6ce7cd8
	I1005 20:23:26.344781  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:26.344789  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:26.344797  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:26.344806  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:26.344818  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:26 GMT
	I1005 20:23:26.344981  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"509","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1005 20:23:26.841588  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:23:26.841619  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:26.841630  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:26.841639  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:26.843940  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:26.843970  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:26.843981  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:26 GMT
	I1005 20:23:26.843990  427001 round_trippers.go:580]     Audit-Id: e7236d65-c910-421b-9b43-987f2b79f4d9
	I1005 20:23:26.843997  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:26.844006  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:26.844015  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:26.844022  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:26.844128  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"509","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1005 20:23:27.341892  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:23:27.341920  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:27.341929  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:27.341938  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:27.344245  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:27.344268  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:27.344278  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:27 GMT
	I1005 20:23:27.344288  427001 round_trippers.go:580]     Audit-Id: de5c13ab-48ed-46da-a93b-724f9a5fe64f
	I1005 20:23:27.344297  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:27.344304  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:27.344316  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:27.344324  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:27.344430  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"509","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1005 20:23:27.344730  427001 node_ready.go:58] node "multinode-401792-m02" has status "Ready":"False"
	I1005 20:23:27.842120  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:23:27.842146  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:27.842155  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:27.842162  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:27.844515  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:27.844540  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:27.844550  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:27.844559  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:27.844566  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:27.844573  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:27.844581  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:27 GMT
	I1005 20:23:27.844593  427001 round_trippers.go:580]     Audit-Id: 106410ea-01a3-4313-97df-a06564f5ac3d
	I1005 20:23:27.844698  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"509","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1005 20:23:28.341373  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:23:28.341399  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:28.341408  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:28.341414  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:28.343838  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:28.343864  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:28.343875  427001 round_trippers.go:580]     Audit-Id: 590c95e2-0b37-4dd7-9045-6e0ddc709f89
	I1005 20:23:28.343884  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:28.343893  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:28.343902  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:28.343909  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:28.343916  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:28 GMT
	I1005 20:23:28.344111  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"509","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1005 20:23:28.841640  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:23:28.841666  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:28.841675  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:28.841686  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:28.844069  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:28.844096  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:28.844111  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:28 GMT
	I1005 20:23:28.844119  427001 round_trippers.go:580]     Audit-Id: a9ec034d-14bf-4583-b02f-07cc5e718bf5
	I1005 20:23:28.844126  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:28.844134  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:28.844142  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:28.844154  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:28.844253  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"509","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1005 20:23:29.341914  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:23:29.341940  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:29.341950  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:29.341956  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:29.344175  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:29.344199  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:29.344206  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:29.344212  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:29.344218  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:29.344223  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:29.344232  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:29 GMT
	I1005 20:23:29.344240  427001 round_trippers.go:580]     Audit-Id: f6dfe84a-1447-4e01-968f-d18282c244c5
	I1005 20:23:29.344373  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"509","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1005 20:23:29.842069  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:23:29.842094  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:29.842103  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:29.842108  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:29.844477  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:29.844505  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:29.844514  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:29.844523  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:29 GMT
	I1005 20:23:29.844530  427001 round_trippers.go:580]     Audit-Id: 6877f4bd-c0be-411b-a761-5565de0ce0b9
	I1005 20:23:29.844538  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:29.844547  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:29.844556  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:29.844669  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"532","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5296 chars]
	I1005 20:23:29.845002  427001 node_ready.go:49] node "multinode-401792-m02" has status "Ready":"True"
	I1005 20:23:29.845020  427001 node_ready.go:38] duration metric: took 31.509438306s waiting for node "multinode-401792-m02" to be "Ready" ...
	I1005 20:23:29.845033  427001 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
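
The node-readiness wait that ends above is plain client-side polling: roughly every 500ms the helper re-issues GET /api/v1/nodes/multinode-401792-m02 and inspects the returned conditions until the node reports "Ready":"True" (31.5s in this run, once the node object advanced from resourceVersion 509 to 532). The sketch below is a minimal, hypothetical re-creation of such a loop with client-go; it is not minikube's actual node_ready.go, and the function name, kubeconfig path, poll interval, and timeout are illustrative assumptions.

    // Hypothetical sketch of a node-readiness poll like the one logged above.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady re-fetches the Node every interval until its NodeReady
    // condition is True or ctx expires (the overall wait budget).
    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, interval time.Duration) error {
        ticker := time.NewTicker(interval)
        defer ticker.Stop()
        for {
            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            for _, cond := range node.Status.Conditions {
                if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
                    return nil // node now has status "Ready":"True"
                }
            }
            select {
            case <-ctx.Done():
                return ctx.Err() // wait budget exhausted
            case <-ticker.C:
            }
        }
    }

    func main() {
        // Assumed kubeconfig location; a real minikube profile writes its own.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        if err := waitNodeReady(ctx, cs, "multinode-401792-m02", 500*time.Millisecond); err != nil {
            panic(err)
        }
        fmt.Println("node is Ready")
    }

The same budget-plus-ticker pattern carries over to the per-pod waits recorded next.
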
	I1005 20:23:29.845124  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1005 20:23:29.845133  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:29.845144  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:29.845154  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:29.850066  427001 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1005 20:23:29.850092  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:29.850099  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:29.850104  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:29.850109  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:29.850114  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:29 GMT
	I1005 20:23:29.850119  427001 round_trippers.go:580]     Audit-Id: e50d66db-0e36-43a5-a099-8f32ec8f2f69
	I1005 20:23:29.850124  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:29.850773  427001 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"532"},"items":[{"metadata":{"name":"coredns-5dd5756b68-nctb6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6db951fc-21de-44c9-9e94-cfe1ab7ac040","resourceVersion":"442","creationTimestamp":"2023-10-05T20:22:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"63fb560e-8695-4b28-89cf-d0b3759b9e96","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"63fb560e-8695-4b28-89cf-d0b3759b9e96\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68972 chars]
	I1005 20:23:29.853106  427001 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-nctb6" in "kube-system" namespace to be "Ready" ...
	I1005 20:23:29.853217  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nctb6
	I1005 20:23:29.853228  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:29.853240  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:29.853251  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:29.855642  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:29.855663  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:29.855671  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:29.855679  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:29 GMT
	I1005 20:23:29.855690  427001 round_trippers.go:580]     Audit-Id: 3d159be7-7f7e-46b8-b437-f9cadaf364f8
	I1005 20:23:29.855698  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:29.855711  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:29.855724  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:29.855825  427001 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nctb6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6db951fc-21de-44c9-9e94-cfe1ab7ac040","resourceVersion":"442","creationTimestamp":"2023-10-05T20:22:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"63fb560e-8695-4b28-89cf-d0b3759b9e96","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"63fb560e-8695-4b28-89cf-d0b3759b9e96\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1005 20:23:29.856270  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:23:29.856285  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:29.856295  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:29.856303  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:29.858313  427001 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1005 20:23:29.858333  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:29.858343  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:29.858351  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:29 GMT
	I1005 20:23:29.858359  427001 round_trippers.go:580]     Audit-Id: c0fb0efe-cd99-48f4-bdbb-5816f109c911
	I1005 20:23:29.858368  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:29.858383  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:29.858402  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:29.858532  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"424","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1005 20:23:29.858944  427001 pod_ready.go:92] pod "coredns-5dd5756b68-nctb6" in "kube-system" namespace has status "Ready":"True"
	I1005 20:23:29.858963  427001 pod_ready.go:81] duration metric: took 5.825574ms waiting for pod "coredns-5dd5756b68-nctb6" in "kube-system" namespace to be "Ready" ...
	I1005 20:23:29.858976  427001 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-401792" in "kube-system" namespace to be "Ready" ...
	I1005 20:23:29.859080  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-401792
	I1005 20:23:29.859090  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:29.859102  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:29.859111  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:29.861486  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:29.861506  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:29.861513  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:29 GMT
	I1005 20:23:29.861518  427001 round_trippers.go:580]     Audit-Id: 74503091-d354-4c75-85ee-47d2e73cde76
	I1005 20:23:29.861524  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:29.861529  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:29.861535  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:29.861543  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:29.861645  427001 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-401792","namespace":"kube-system","uid":"44ad3fe4-b132-45ea-93d3-35a3740a12ea","resourceVersion":"317","creationTimestamp":"2023-10-05T20:21:56Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"d22baac935db4c8bd25db8b27c0e22ad","kubernetes.io/config.mirror":"d22baac935db4c8bd25db8b27c0e22ad","kubernetes.io/config.seen":"2023-10-05T20:21:55.839321424Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:21:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1005 20:23:29.862081  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:23:29.862095  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:29.862102  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:29.862113  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:29.864467  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:29.864487  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:29.864494  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:29 GMT
	I1005 20:23:29.864500  427001 round_trippers.go:580]     Audit-Id: 3bded84e-8191-401d-847c-c65c046b06ee
	I1005 20:23:29.864505  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:29.864510  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:29.864515  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:29.864520  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:29.864624  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"424","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1005 20:23:29.864920  427001 pod_ready.go:92] pod "etcd-multinode-401792" in "kube-system" namespace has status "Ready":"True"
	I1005 20:23:29.864934  427001 pod_ready.go:81] duration metric: took 5.942306ms waiting for pod "etcd-multinode-401792" in "kube-system" namespace to be "Ready" ...
	I1005 20:23:29.864949  427001 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-401792" in "kube-system" namespace to be "Ready" ...
	I1005 20:23:29.865005  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-401792
	I1005 20:23:29.865012  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:29.865020  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:29.865026  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:29.867128  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:29.867151  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:29.867161  427001 round_trippers.go:580]     Audit-Id: 41a15b6a-4cc5-4f4e-83c6-66a325df6df6
	I1005 20:23:29.867171  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:29.867180  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:29.867186  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:29.867191  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:29.867197  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:29 GMT
	I1005 20:23:29.867353  427001 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-401792","namespace":"kube-system","uid":"8b0de222-02fa-4bd6-b82e-e4b5e09908ec","resourceVersion":"408","creationTimestamp":"2023-10-05T20:21:56Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"9d50d9a0bb9882a5f98fd50755a0d758","kubernetes.io/config.mirror":"9d50d9a0bb9882a5f98fd50755a0d758","kubernetes.io/config.seen":"2023-10-05T20:21:55.839327911Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:21:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I1005 20:23:29.867780  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:23:29.867793  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:29.867801  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:29.867807  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:29.869749  427001 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1005 20:23:29.869772  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:29.869782  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:29 GMT
	I1005 20:23:29.869792  427001 round_trippers.go:580]     Audit-Id: 2c83429e-f5f9-4b8f-8f96-1380b834af42
	I1005 20:23:29.869800  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:29.869809  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:29.869818  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:29.869826  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:29.869933  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"424","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1005 20:23:29.870264  427001 pod_ready.go:92] pod "kube-apiserver-multinode-401792" in "kube-system" namespace has status "Ready":"True"
	I1005 20:23:29.870279  427001 pod_ready.go:81] duration metric: took 5.321821ms waiting for pod "kube-apiserver-multinode-401792" in "kube-system" namespace to be "Ready" ...
	I1005 20:23:29.870289  427001 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-401792" in "kube-system" namespace to be "Ready" ...
	I1005 20:23:29.870346  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-401792
	I1005 20:23:29.870356  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:29.870363  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:29.870368  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:29.872333  427001 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1005 20:23:29.872349  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:29.872356  427001 round_trippers.go:580]     Audit-Id: 572936a3-77fe-4f8a-85a2-40cd8d2971e8
	I1005 20:23:29.872362  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:29.872367  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:29.872372  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:29.872378  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:29.872392  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:29 GMT
	I1005 20:23:29.872574  427001 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-401792","namespace":"kube-system","uid":"99f9133a-d7c6-4415-9d5d-d215ed75bc7b","resourceVersion":"311","creationTimestamp":"2023-10-05T20:21:55Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4f3ca30035e4a8deac072828f388eccd","kubernetes.io/config.mirror":"4f3ca30035e4a8deac072828f388eccd","kubernetes.io/config.seen":"2023-10-05T20:21:49.928194447Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:21:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I1005 20:23:29.873142  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:23:29.873157  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:29.873168  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:29.873182  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:29.874985  427001 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1005 20:23:29.875008  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:29.875018  427001 round_trippers.go:580]     Audit-Id: d66b1b3b-3f10-4caf-ad8b-6650c29b6dce
	I1005 20:23:29.875027  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:29.875036  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:29.875045  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:29.875057  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:29.875083  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:29 GMT
	I1005 20:23:29.875204  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"424","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1005 20:23:29.875555  427001 pod_ready.go:92] pod "kube-controller-manager-multinode-401792" in "kube-system" namespace has status "Ready":"True"
	I1005 20:23:29.875571  427001 pod_ready.go:81] duration metric: took 5.274922ms waiting for pod "kube-controller-manager-multinode-401792" in "kube-system" namespace to be "Ready" ...
	I1005 20:23:29.875582  427001 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-l9dpz" in "kube-system" namespace to be "Ready" ...
	I1005 20:23:30.042973  427001 request.go:629] Waited for 167.320894ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-l9dpz
	I1005 20:23:30.043051  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-l9dpz
	I1005 20:23:30.043056  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:30.043085  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:30.043095  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:30.045459  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:30.045481  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:30.045489  427001 round_trippers.go:580]     Audit-Id: 73deb014-2887-44e6-909c-632bd744f708
	I1005 20:23:30.045495  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:30.045502  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:30.045509  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:30.045517  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:30.045526  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:30 GMT
	I1005 20:23:30.045643  427001 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-l9dpz","generateName":"kube-proxy-","namespace":"kube-system","uid":"386ee581-e207-45ad-a08c-86a0804a2233","resourceVersion":"409","creationTimestamp":"2023-10-05T20:22:08Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b6c80ab9-89a2-4cdd-af70-bbfa2d07f2c8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b6c80ab9-89a2-4cdd-af70-bbfa2d07f2c8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I1005 20:23:30.242506  427001 request.go:629] Waited for 196.352779ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:23:30.242575  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:23:30.242582  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:30.242590  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:30.242599  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:30.245092  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:30.245124  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:30.245136  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:30.245146  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:30.245154  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:30 GMT
	I1005 20:23:30.245162  427001 round_trippers.go:580]     Audit-Id: 7ca8e7de-c3db-45e7-9ded-0f42d2ada2b0
	I1005 20:23:30.245169  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:30.245180  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:30.245316  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"424","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1005 20:23:30.245691  427001 pod_ready.go:92] pod "kube-proxy-l9dpz" in "kube-system" namespace has status "Ready":"True"
	I1005 20:23:30.245706  427001 pod_ready.go:81] duration metric: took 370.117882ms waiting for pod "kube-proxy-l9dpz" in "kube-system" namespace to be "Ready" ...
	I1005 20:23:30.245719  427001 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xj76m" in "kube-system" namespace to be "Ready" ...
	I1005 20:23:30.442685  427001 request.go:629] Waited for 196.86979ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xj76m
	I1005 20:23:30.442775  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xj76m
	I1005 20:23:30.442788  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:30.442800  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:30.442811  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:30.445479  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:30.445499  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:30.445508  427001 round_trippers.go:580]     Audit-Id: a60e96dc-bcc8-4c9c-838b-6f73509e6828
	I1005 20:23:30.445513  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:30.445518  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:30.445524  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:30.445529  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:30.445534  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:30 GMT
	I1005 20:23:30.445730  427001 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xj76m","generateName":"kube-proxy-","namespace":"kube-system","uid":"e4a3989d-6424-4ab0-ad9b-ddc96139b498","resourceVersion":"498","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b6c80ab9-89a2-4cdd-af70-bbfa2d07f2c8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b6c80ab9-89a2-4cdd-af70-bbfa2d07f2c8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I1005 20:23:30.642559  427001 request.go:629] Waited for 196.365442ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:23:30.642645  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792-m02
	I1005 20:23:30.642650  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:30.642658  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:30.642667  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:30.645314  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:30.645336  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:30.645343  427001 round_trippers.go:580]     Audit-Id: 08b02121-8ff2-4d18-a2b1-53d638afe3d3
	I1005 20:23:30.645349  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:30.645354  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:30.645359  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:30.645365  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:30.645373  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:30 GMT
	I1005 20:23:30.645531  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792-m02","uid":"c3dd02f1-958b-4f89-b667-8ad7fe04ee4f","resourceVersion":"532","creationTimestamp":"2023-10-05T20:22:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5296 chars]
	I1005 20:23:30.645868  427001 pod_ready.go:92] pod "kube-proxy-xj76m" in "kube-system" namespace has status "Ready":"True"
	I1005 20:23:30.645886  427001 pod_ready.go:81] duration metric: took 400.1557ms waiting for pod "kube-proxy-xj76m" in "kube-system" namespace to be "Ready" ...
	I1005 20:23:30.645896  427001 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-401792" in "kube-system" namespace to be "Ready" ...
	I1005 20:23:30.842268  427001 request.go:629] Waited for 196.275142ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-401792
	I1005 20:23:30.842334  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-401792
	I1005 20:23:30.842340  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:30.842350  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:30.842356  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:30.844812  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:30.844836  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:30.844843  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:30.844849  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:30 GMT
	I1005 20:23:30.844854  427001 round_trippers.go:580]     Audit-Id: c1f02e22-9ba3-4c96-b989-76665e753c6f
	I1005 20:23:30.844861  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:30.844868  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:30.844876  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:30.845060  427001 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-401792","namespace":"kube-system","uid":"33043544-61f1-4457-b66d-11bfdac4a024","resourceVersion":"314","creationTimestamp":"2023-10-05T20:21:56Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1f345bfdbc551b09bcec71a1eb70094b","kubernetes.io/config.mirror":"1f345bfdbc551b09bcec71a1eb70094b","kubernetes.io/config.seen":"2023-10-05T20:21:55.839330891Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T20:21:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1005 20:23:31.042860  427001 request.go:629] Waited for 197.371693ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:23:31.042940  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-401792
	I1005 20:23:31.042945  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:31.042953  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:31.042960  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:31.045356  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:31.045379  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:31.045387  427001 round_trippers.go:580]     Audit-Id: 1234f954-abac-4107-bb83-2f59e6bf11ee
	I1005 20:23:31.045393  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:31.045401  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:31.045409  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:31.045417  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:31.045425  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:31 GMT
	I1005 20:23:31.045533  427001 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"424","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T20:21:52Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1005 20:23:31.045849  427001 pod_ready.go:92] pod "kube-scheduler-multinode-401792" in "kube-system" namespace has status "Ready":"True"
	I1005 20:23:31.045863  427001 pod_ready.go:81] duration metric: took 399.961717ms waiting for pod "kube-scheduler-multinode-401792" in "kube-system" namespace to be "Ready" ...
	I1005 20:23:31.045876  427001 pod_ready.go:38] duration metric: took 1.200823718s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
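
	The loop above is minikube's pod_ready.go waiting for each control-plane pod's Ready condition before declaring the cluster usable. A minimal client-go sketch of an equivalent check (a hypothetical helper, not minikube's actual implementation; the polling interval and kubeconfig path are assumptions):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPodReady polls until the named pod reports Ready=True or the
	// timeout elapses, mirroring the "waiting up to 6m0s for pod ..." lines.
	func waitForPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return false, err // stop on API errors; retrying is also reasonable
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil // Ready condition not posted yet
		})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitForPodReady(cs, "kube-system", "etcd-multinode-401792", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("pod is Ready")
	}
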
	I1005 20:23:31.045894  427001 system_svc.go:44] waiting for kubelet service to be running ....
	I1005 20:23:31.045948  427001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1005 20:23:31.057454  427001 system_svc.go:56] duration metric: took 11.549846ms WaitForService to wait for kubelet.
	I1005 20:23:31.057488  427001 kubeadm.go:581] duration metric: took 32.738474918s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1005 20:23:31.057512  427001 node_conditions.go:102] verifying NodePressure condition ...
	I1005 20:23:31.242957  427001 request.go:629] Waited for 185.358602ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1005 20:23:31.243036  427001 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1005 20:23:31.243043  427001 round_trippers.go:469] Request Headers:
	I1005 20:23:31.243058  427001 round_trippers.go:473]     Accept: application/json, */*
	I1005 20:23:31.243087  427001 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1005 20:23:31.245639  427001 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 20:23:31.245663  427001 round_trippers.go:577] Response Headers:
	I1005 20:23:31.245671  427001 round_trippers.go:580]     Content-Type: application/json
	I1005 20:23:31.245679  427001 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 545f9d67-8ccd-450b-9aa3-e5fe5d517a60
	I1005 20:23:31.245687  427001 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b76e637e-5aa3-40d1-8475-98a02159b936
	I1005 20:23:31.245695  427001 round_trippers.go:580]     Date: Thu, 05 Oct 2023 20:23:31 GMT
	I1005 20:23:31.245702  427001 round_trippers.go:580]     Audit-Id: b6a82b8a-4c98-4ec7-b340-ad4bffb340da
	I1005 20:23:31.245714  427001 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 20:23:31.245873  427001 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"533"},"items":[{"metadata":{"name":"multinode-401792","uid":"708b3ffe-bfc9-43b4-8c1d-93b01972f0b5","resourceVersion":"424","creationTimestamp":"2023-10-05T20:21:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-401792","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-401792","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T20_21_56_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 12288 chars]
	I1005 20:23:31.246354  427001 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1005 20:23:31.246370  427001 node_conditions.go:123] node cpu capacity is 8
	I1005 20:23:31.246381  427001 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1005 20:23:31.246385  427001 node_conditions.go:123] node cpu capacity is 8
	I1005 20:23:31.246392  427001 node_conditions.go:105] duration metric: took 188.872897ms to run NodePressure ...
	I1005 20:23:31.246406  427001 start.go:228] waiting for startup goroutines ...
	I1005 20:23:31.246430  427001 start.go:242] writing updated cluster config ...
	I1005 20:23:31.246724  427001 ssh_runner.go:195] Run: rm -f paused
	I1005 20:23:31.296898  427001 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1005 20:23:31.300221  427001 out.go:177] * Done! kubectl is now configured to use "multinode-401792" cluster and "default" namespace by default
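
	Several requests above were delayed by roughly 170-200ms with "Waited for ... due to client-side throttling, not priority and fairness": client-go rate-limits requests locally (default QPS=5, Burst=10) before the server-side API Priority and Fairness machinery, visible in the X-Kubernetes-Pf-Flowschema-Uid / X-Kubernetes-Pf-Prioritylevel-Uid response headers, is ever consulted. A sketch of where that limiter lives (the raised values are illustrative, not what minikube uses):

	package main

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		// With QPS/Burst left at zero, client-go falls back to QPS=5, Burst=10;
		// bursts of GETs beyond that are queued locally, producing the
		// "client-side throttling" waits seen in the log above.
		cfg.QPS = 50    // steady-state requests per second (illustrative)
		cfg.Burst = 100 // short-burst allowance (illustrative)
		_ = kubernetes.NewForConfigOrDie(cfg)
	}
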
	
	* 
	* ==> CRI-O <==
	* Oct 05 20:22:40 multinode-401792 crio[957]: time="2023-10-05 20:22:40.183450905Z" level=info msg="Starting container: 6cd56aeba4f0bf7dcd6e3425781d72d03add669c947c03b93ef901f07f005b06" id=a94efeb7-a5e9-4d8d-91d3-69cc7cfc5176 name=/runtime.v1.RuntimeService/StartContainer
	Oct 05 20:22:40 multinode-401792 crio[957]: time="2023-10-05 20:22:40.185753649Z" level=info msg="Created container 97429b02c128e1ee77080822d3fdbe15a7af83a7e94df3786920be5c9046441c: kube-system/coredns-5dd5756b68-nctb6/coredns" id=bc1393aa-dfec-4bdf-bc4c-a4f28ccd1951 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 05 20:22:40 multinode-401792 crio[957]: time="2023-10-05 20:22:40.186458614Z" level=info msg="Starting container: 97429b02c128e1ee77080822d3fdbe15a7af83a7e94df3786920be5c9046441c" id=781c2d3a-61e1-419c-9888-aa93d246b15c name=/runtime.v1.RuntimeService/StartContainer
	Oct 05 20:22:40 multinode-401792 crio[957]: time="2023-10-05 20:22:40.220605817Z" level=info msg="Started container" PID=2336 containerID=97429b02c128e1ee77080822d3fdbe15a7af83a7e94df3786920be5c9046441c description=kube-system/coredns-5dd5756b68-nctb6/coredns id=781c2d3a-61e1-419c-9888-aa93d246b15c name=/runtime.v1.RuntimeService/StartContainer sandboxID=5f3868a0c8977f4801e05ddcd66ea5236fb9f9b83caac047f63acdafdf4fbe6a
	Oct 05 20:22:40 multinode-401792 crio[957]: time="2023-10-05 20:22:40.221097161Z" level=info msg="Started container" PID=2328 containerID=6cd56aeba4f0bf7dcd6e3425781d72d03add669c947c03b93ef901f07f005b06 description=kube-system/storage-provisioner/storage-provisioner id=a94efeb7-a5e9-4d8d-91d3-69cc7cfc5176 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9bcef1e5fdb7f041e756c515ed4d66637817f3b4051a32eea7c79fe4de5fef2d
	Oct 05 20:23:32 multinode-401792 crio[957]: time="2023-10-05 20:23:32.287007562Z" level=info msg="Running pod sandbox: default/busybox-5bc68d56bd-zj2tk/POD" id=4a028729-4f58-4171-813d-4a3bf10a0e02 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 05 20:23:32 multinode-401792 crio[957]: time="2023-10-05 20:23:32.287123674Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 05 20:23:32 multinode-401792 crio[957]: time="2023-10-05 20:23:32.301632388Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-zj2tk Namespace:default ID:3d4d5e7a3d4c09129f19f3ff465edc37eab41bf45d15ffaf7b429e8a668bc4f8 UID:e0c273f9-2a0f-40ef-acaa-677f0f07d19c NetNS:/var/run/netns/89990702-4eeb-4afc-afa3-7bf50a30b34c Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 05 20:23:32 multinode-401792 crio[957]: time="2023-10-05 20:23:32.301683133Z" level=info msg="Adding pod default_busybox-5bc68d56bd-zj2tk to CNI network \"kindnet\" (type=ptp)"
	Oct 05 20:23:32 multinode-401792 crio[957]: time="2023-10-05 20:23:32.310748734Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-zj2tk Namespace:default ID:3d4d5e7a3d4c09129f19f3ff465edc37eab41bf45d15ffaf7b429e8a668bc4f8 UID:e0c273f9-2a0f-40ef-acaa-677f0f07d19c NetNS:/var/run/netns/89990702-4eeb-4afc-afa3-7bf50a30b34c Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 05 20:23:32 multinode-401792 crio[957]: time="2023-10-05 20:23:32.310895856Z" level=info msg="Checking pod default_busybox-5bc68d56bd-zj2tk for CNI network kindnet (type=ptp)"
	Oct 05 20:23:32 multinode-401792 crio[957]: time="2023-10-05 20:23:32.350396018Z" level=info msg="Ran pod sandbox 3d4d5e7a3d4c09129f19f3ff465edc37eab41bf45d15ffaf7b429e8a668bc4f8 with infra container: default/busybox-5bc68d56bd-zj2tk/POD" id=4a028729-4f58-4171-813d-4a3bf10a0e02 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 05 20:23:32 multinode-401792 crio[957]: time="2023-10-05 20:23:32.352345931Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=7a48bf64-66e1-4f4f-944a-9174efe30916 name=/runtime.v1.ImageService/ImageStatus
	Oct 05 20:23:32 multinode-401792 crio[957]: time="2023-10-05 20:23:32.352640025Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=7a48bf64-66e1-4f4f-944a-9174efe30916 name=/runtime.v1.ImageService/ImageStatus
	Oct 05 20:23:32 multinode-401792 crio[957]: time="2023-10-05 20:23:32.353542616Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=4bc50a83-08cc-4ebd-97dc-d55fc7563e94 name=/runtime.v1.ImageService/PullImage
	Oct 05 20:23:32 multinode-401792 crio[957]: time="2023-10-05 20:23:32.356901583Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Oct 05 20:23:32 multinode-401792 crio[957]: time="2023-10-05 20:23:32.520693525Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Oct 05 20:23:32 multinode-401792 crio[957]: time="2023-10-05 20:23:32.950303634Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335" id=4bc50a83-08cc-4ebd-97dc-d55fc7563e94 name=/runtime.v1.ImageService/PullImage
	Oct 05 20:23:32 multinode-401792 crio[957]: time="2023-10-05 20:23:32.951296681Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=a45412e5-9795-4705-b5a8-0725acbca65b name=/runtime.v1.ImageService/ImageStatus
	Oct 05 20:23:32 multinode-401792 crio[957]: time="2023-10-05 20:23:32.951883405Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=a45412e5-9795-4705-b5a8-0725acbca65b name=/runtime.v1.ImageService/ImageStatus
	Oct 05 20:23:32 multinode-401792 crio[957]: time="2023-10-05 20:23:32.952869513Z" level=info msg="Creating container: default/busybox-5bc68d56bd-zj2tk/busybox" id=78af802b-afaa-4dbe-9c00-b657dab73ad4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 05 20:23:32 multinode-401792 crio[957]: time="2023-10-05 20:23:32.952981204Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 05 20:23:33 multinode-401792 crio[957]: time="2023-10-05 20:23:33.026112392Z" level=info msg="Created container d4e521c1538ab446680b423dd86cd72b126b886f015759095bfc2698068046b7: default/busybox-5bc68d56bd-zj2tk/busybox" id=78af802b-afaa-4dbe-9c00-b657dab73ad4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 05 20:23:33 multinode-401792 crio[957]: time="2023-10-05 20:23:33.026771525Z" level=info msg="Starting container: d4e521c1538ab446680b423dd86cd72b126b886f015759095bfc2698068046b7" id=89b9c0d0-b78b-422d-a9be-190d075ac6e8 name=/runtime.v1.RuntimeService/StartContainer
	Oct 05 20:23:33 multinode-401792 crio[957]: time="2023-10-05 20:23:33.035645922Z" level=info msg="Started container" PID=2511 containerID=d4e521c1538ab446680b423dd86cd72b126b886f015759095bfc2698068046b7 description=default/busybox-5bc68d56bd-zj2tk/busybox id=89b9c0d0-b78b-422d-a9be-190d075ac6e8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3d4d5e7a3d4c09129f19f3ff465edc37eab41bf45d15ffaf7b429e8a668bc4f8
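
	The sequence above is the normal CRI image-pull path: ImageStatus reports the tag missing, PullImage fetches it (resolved to a digest), a second ImageStatus confirms it is present, and CreateContainer/StartContainer run it. The same runtime state can be inspected from inside the node (via minikube ssh -p multinode-401792) with, for example:

	  crictl images | grep busybox
	  crictl ps --name busybox
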
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	d4e521c1538ab       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 seconds ago        Running             busybox                   0                   3d4d5e7a3d4c0       busybox-5bc68d56bd-zj2tk
	97429b02c128e       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      57 seconds ago       Running             coredns                   0                   5f3868a0c8977       coredns-5dd5756b68-nctb6
	6cd56aeba4f0b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      57 seconds ago       Running             storage-provisioner       0                   9bcef1e5fdb7f       storage-provisioner
	06681d05240b3       c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0                                      About a minute ago   Running             kube-proxy                0                   ebea63ab0257c       kube-proxy-l9dpz
	c140b7130fc0a       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      About a minute ago   Running             kindnet-cni               0                   3fc0eefe37eea       kindnet-fnck9
	d1f78a2e4e449       7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8                                      About a minute ago   Running             kube-scheduler            0                   94744d196fad6       kube-scheduler-multinode-401792
	897d8bc73ce62       cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce                                      About a minute ago   Running             kube-apiserver            0                   1dba89827d10b       kube-apiserver-multinode-401792
	c25db63db46f3       55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57                                      About a minute ago   Running             kube-controller-manager   0                   b39af9ed92766       kube-controller-manager-multinode-401792
	b25934a891a1e       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      About a minute ago   Running             etcd                      0                   6bdff3879c5ba       etcd-multinode-401792
	
	* 
	* ==> coredns [97429b02c128e1ee77080822d3fdbe15a7af83a7e94df3786920be5c9046441c] <==
	* [INFO] 10.244.1.2:40132 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000106076s
	[INFO] 10.244.0.3:36974 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120882s
	[INFO] 10.244.0.3:53181 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001309624s
	[INFO] 10.244.0.3:39428 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000083474s
	[INFO] 10.244.0.3:38902 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000050755s
	[INFO] 10.244.0.3:49039 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001006938s
	[INFO] 10.244.0.3:50606 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000051916s
	[INFO] 10.244.0.3:59203 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000077162s
	[INFO] 10.244.0.3:46167 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000040785s
	[INFO] 10.244.1.2:47631 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144233s
	[INFO] 10.244.1.2:49975 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000114188s
	[INFO] 10.244.1.2:48311 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000063634s
	[INFO] 10.244.1.2:43607 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000051747s
	[INFO] 10.244.0.3:59442 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123473s
	[INFO] 10.244.0.3:33785 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000096049s
	[INFO] 10.244.0.3:49400 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00005046s
	[INFO] 10.244.0.3:57473 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000054225s
	[INFO] 10.244.1.2:42158 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015091s
	[INFO] 10.244.1.2:34818 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00011867s
	[INFO] 10.244.1.2:51458 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000135238s
	[INFO] 10.244.1.2:59259 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000098637s
	[INFO] 10.244.0.3:46417 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000103885s
	[INFO] 10.244.0.3:46152 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000084996s
	[INFO] 10.244.0.3:40283 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00005224s
	[INFO] 10.244.0.3:54037 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000050299s
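
	The query pattern above is ordinary resolv.conf search-path expansion: a short name like "kubernetes.default" is tried against the pod's search domains (hence the NXDOMAIN for "kubernetes.default.default.svc.cluster.local") until "kubernetes.default.svc.cluster.local" answers, and the PTR lookups reverse the DNS service VIP 10.96.0.10 and the host gateway 192.168.58.1. A sketch of the kubelet-generated resolv.conf for a pod in the default namespace that produces this behavior (standard contents, assumed here rather than captured from the node):

	  nameserver 10.96.0.10
	  search default.svc.cluster.local svc.cluster.local cluster.local
	  options ndots:5
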
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-401792
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-401792
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=300d55cee86053f5b4c7a654fc8e7b9d3c030d53
	                    minikube.k8s.io/name=multinode-401792
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_05T20_21_56_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Oct 2023 20:21:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-401792
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Oct 2023 20:23:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Oct 2023 20:22:39 +0000   Thu, 05 Oct 2023 20:21:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Oct 2023 20:22:39 +0000   Thu, 05 Oct 2023 20:21:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Oct 2023 20:22:39 +0000   Thu, 05 Oct 2023 20:21:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Oct 2023 20:22:39 +0000   Thu, 05 Oct 2023 20:22:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-401792
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	System Info:
	  Machine ID:                 e3e4070778cb4f7e80fb533d92259edc
	  System UUID:                220880cd-5011-48f4-b136-d0968fcc0641
	  Boot ID:                    442b7abc-f6f6-4fc0-9fdb-d53241b6517a
	  Kernel Version:             5.15.0-1044-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-zj2tk                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  kube-system                 coredns-5dd5756b68-nctb6                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     89s
	  kube-system                 etcd-multinode-401792                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         101s
	  kube-system                 kindnet-fnck9                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      89s
	  kube-system                 kube-apiserver-multinode-401792             250m (3%)     0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-controller-manager-multinode-401792    200m (2%)     0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-proxy-l9dpz                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-scheduler-multinode-401792             100m (1%)     0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 87s                  kube-proxy       
	  Normal  Starting                 108s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  107s (x8 over 108s)  kubelet          Node multinode-401792 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    107s (x8 over 108s)  kubelet          Node multinode-401792 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     107s (x8 over 108s)  kubelet          Node multinode-401792 status is now: NodeHasSufficientPID
	  Normal  Starting                 102s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  102s                 kubelet          Node multinode-401792 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    102s                 kubelet          Node multinode-401792 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     102s                 kubelet          Node multinode-401792 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           90s                  node-controller  Node multinode-401792 event: Registered Node multinode-401792 in Controller
	  Normal  NodeReady                58s                  kubelet          Node multinode-401792 status is now: NodeReady
	
	
	Name:               multinode-401792-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-401792-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Oct 2023 20:22:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-401792-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Oct 2023 20:23:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Oct 2023 20:23:29 +0000   Thu, 05 Oct 2023 20:22:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Oct 2023 20:23:29 +0000   Thu, 05 Oct 2023 20:22:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Oct 2023 20:23:29 +0000   Thu, 05 Oct 2023 20:22:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Oct 2023 20:23:29 +0000   Thu, 05 Oct 2023 20:23:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-401792-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	System Info:
	  Machine ID:                 929fa86d23954b0a935b7c9394bfa5ec
	  System UUID:                0dd1902e-dc04-40b1-bea0-0e284464c136
	  Boot ID:                    442b7abc-f6f6-4fc0-9fdb-d53241b6517a
	  Kernel Version:             5.15.0-1044-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-bk8vz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  kube-system                 kindnet-zhd4q               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      40s
	  kube-system                 kube-proxy-xj76m            0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 38s                kube-proxy       
	  Normal  NodeHasSufficientMemory  40s (x5 over 41s)  kubelet          Node multinode-401792-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s (x5 over 41s)  kubelet          Node multinode-401792-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s (x5 over 41s)  kubelet          Node multinode-401792-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           35s                node-controller  Node multinode-401792-m02 event: Registered Node multinode-401792-m02 in Controller
	  Normal  NodeReady                8s                 kubelet          Node multinode-401792-m02 status is now: NodeReady
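
	The two blocks above are kubectl describe output for the control-plane node and the worker; against the same profile it can be regenerated with:

	  kubectl --context multinode-401792 describe nodes
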
	
	* 
	* ==> dmesg <==
	* [  +0.007365] FS-Cache: O-key=[8] 'b2a20f0200000000'
	[  +0.005044] FS-Cache: N-cookie c=00000022 [p=00000015 fl=2 nc=0 na=1]
	[  +0.007962] FS-Cache: N-cookie d=0000000012ce25ef{9p.inode} n=00000000233f08a6
	[  +0.008756] FS-Cache: N-key=[8] 'b2a20f0200000000'
	[  +3.104090] FS-Cache: Duplicate cookie detected
	[  +0.004716] FS-Cache: O-cookie c=00000024 [p=00000002 fl=222 nc=0 na=1]
	[  +0.006789] FS-Cache: O-cookie d=0000000046be6370{9P.session} n=00000000d6c24489
	[  +0.007543] FS-Cache: O-key=[10] '34323936363032393237'
	[  +0.005381] FS-Cache: N-cookie c=00000025 [p=00000002 fl=2 nc=0 na=1]
	[  +0.006580] FS-Cache: N-cookie d=0000000046be6370{9P.session} n=000000001982c203
	[  +0.008913] FS-Cache: N-key=[10] '34323936363032393237'
	[Oct 5 20:14] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 2a 16 8b 98 6c c3 92 5e d5 22 a2 f2 08 00
	[  +1.030902] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 2a 16 8b 98 6c c3 92 5e d5 22 a2 f2 08 00
	[  +2.015780] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 2a 16 8b 98 6c c3 92 5e d5 22 a2 f2 08 00
	[  +4.063616] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 2a 16 8b 98 6c c3 92 5e d5 22 a2 f2 08 00
	[  +8.191210] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 2a 16 8b 98 6c c3 92 5e d5 22 a2 f2 08 00
	[ +16.126422] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 2a 16 8b 98 6c c3 92 5e d5 22 a2 f2 08 00
	[Oct 5 20:15] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 2a 16 8b 98 6c c3 92 5e d5 22 a2 f2 08 00
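
	A "martian source" is a packet whose source address cannot legitimately appear on the receiving interface, here 127.0.0.1 arriving on eth0, a common side effect of hairpin/NAT traffic in nested container networks. The kernel prints these lines only when martian logging is enabled; the setting can be checked with, for example:

	  sysctl net.ipv4.conf.all.log_martians
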
	
	* 
	* ==> etcd [b25934a891a1e71f0da6fb036258760308a0cbf590f96788913d1ee25c0e453e] <==
	* {"level":"info","ts":"2023-10-05T20:21:50.748017Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2023-10-05T20:21:50.748112Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-10-05T20:21:50.749204Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-10-05T20:21:50.749288Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-10-05T20:21:50.749786Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-10-05T20:21:50.749466Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-10-05T20:21:50.74951Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-05T20:21:51.038154Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-10-05T20:21:51.038243Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-10-05T20:21:51.038284Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-10-05T20:21:51.038304Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-10-05T20:21:51.038312Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-10-05T20:21:51.038324Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-10-05T20:21:51.038334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-10-05T20:21:51.039619Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-401792 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-05T20:21:51.039663Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-05T20:21:51.039629Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-05T20:21:51.039667Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-05T20:21:51.039924Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-05T20:21:51.039963Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-05T20:21:51.040513Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-05T20:21:51.040592Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-05T20:21:51.040617Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-05T20:21:51.041195Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-10-05T20:21:51.041447Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
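This etcd log shows a clean single-member bootstrap: the only voter (b2c6679ac05f2cf1) pre-votes and votes for itself, becomes leader at term 2, and client traffic then comes up on 2379. As a hedged illustration (not part of the harness), probing that endpoint with go.etcd.io/etcd/client/v3 could look like the sketch below; the endpoint is copied from the log, and in practice the client TLS material under /var/lib/minikube/certs/etcd would also be needed, so without it the calls fail:

    // etcd_probe.go: hedged sketch, not from the test suite. TLS
    // configuration is omitted for brevity.
    package main

    import (
        "context"
        "fmt"
        "time"

        clientv3 "go.etcd.io/etcd/client/v3"
    )

    func main() {
        cli, err := clientv3.New(clientv3.Config{
            Endpoints:   []string{"https://192.168.58.2:2379"},
            DialTimeout: 5 * time.Second,
        })
        if err != nil {
            panic(err)
        }
        defer cli.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()
        // For this single-member cluster the reported leader should be
        // the member itself (b2c6679ac05f2cf1 in the log above).
        st, err := cli.Status(ctx, "https://192.168.58.2:2379")
        if err != nil {
            panic(err)
        }
        fmt.Printf("etcd %s, leader=%x\n", st.Version, st.Leader)
    }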
	* 
	* ==> kernel <==
	*  20:23:37 up  2:05,  0 users,  load average: 0.46, 1.38, 1.10
	Linux multinode-401792 5.15.0-1044-gcp #52~20.04.1-Ubuntu SMP Wed Sep 20 16:25:19 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [c140b7130fc0ab5fc182ad6932cbead02206285e9f1e3c6175dd462efef1b6e3] <==
	* I1005 20:22:09.322676       1 main.go:116] setting mtu 1500 for CNI 
	I1005 20:22:09.322689       1 main.go:146] kindnetd IP family: "ipv4"
	I1005 20:22:09.322706       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1005 20:22:39.551294       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I1005 20:22:39.558946       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1005 20:22:39.558974       1 main.go:227] handling current node
	I1005 20:22:49.572528       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1005 20:22:49.572553       1 main.go:227] handling current node
	I1005 20:22:59.576877       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1005 20:22:59.576914       1 main.go:227] handling current node
	I1005 20:22:59.576924       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1005 20:22:59.576928       1 main.go:250] Node multinode-401792-m02 has CIDR [10.244.1.0/24] 
	I1005 20:22:59.577109       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	I1005 20:23:09.581150       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1005 20:23:09.581179       1 main.go:227] handling current node
	I1005 20:23:09.581191       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1005 20:23:09.581198       1 main.go:250] Node multinode-401792-m02 has CIDR [10.244.1.0/24] 
	I1005 20:23:19.591664       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1005 20:23:19.591692       1 main.go:227] handling current node
	I1005 20:23:19.591705       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1005 20:23:19.591712       1 main.go:250] Node multinode-401792-m02 has CIDR [10.244.1.0/24] 
	I1005 20:23:29.596670       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1005 20:23:29.596698       1 main.go:227] handling current node
	I1005 20:23:29.596709       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1005 20:23:29.596713       1 main.go:250] Node multinode-401792-m02 has CIDR [10.244.1.0/24] 
	
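kindnet's resync loop (roughly every 10s above) lists the nodes and, for each remote node, installs a route from that node's pod CIDR to its node IP; the "Adding route" line is exactly that for multinode-401792-m02. A minimal sketch of the same route programming with github.com/vishvananda/netlink (which kindnetd itself builds on), reusing the literal values from the log and assuming CAP_NET_ADMIN on Linux:

    // kindnet_route.go: minimal sketch; CIDR and gateway are copied
    // from the log above and differ per cluster.
    package main

    import (
        "log"
        "net"

        "github.com/vishvananda/netlink"
    )

    func main() {
        // Pod CIDR assigned to multinode-401792-m02, reachable via
        // that node's IP on the shared docker network.
        _, dst, err := net.ParseCIDR("10.244.1.0/24")
        if err != nil {
            log.Fatal(err)
        }
        route := &netlink.Route{
            Dst: dst,
            Gw:  net.ParseIP("192.168.58.3"),
        }
        // RouteReplace is idempotent, so the ~10s resync loop seen in
        // the log can re-apply it for the same node without erroring.
        if err := netlink.RouteReplace(route); err != nil {
            log.Fatal(err)
        }
    }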
	* 
	* ==> kube-apiserver [897d8bc73ce62f20b2445fc5df49aa23adf8078198bcc7e9ae528cc6bbbf9c3d] <==
	* I1005 20:21:52.839308       1 controller.go:624] quota admission added evaluator for: namespaces
	I1005 20:21:52.843735       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1005 20:21:52.926513       1 shared_informer.go:318] Caches are synced for node_authorizer
	E1005 20:21:52.931774       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	E1005 20:21:52.931870       1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	I1005 20:21:52.935419       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1005 20:21:52.935424       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1005 20:21:52.935856       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1005 20:21:53.134965       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1005 20:21:53.740540       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1005 20:21:53.744693       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1005 20:21:53.744718       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1005 20:21:54.264440       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1005 20:21:54.309015       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1005 20:21:54.450711       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1005 20:21:54.458209       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I1005 20:21:54.459667       1 controller.go:624] quota admission added evaluator for: endpoints
	I1005 20:21:54.464555       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1005 20:21:54.850018       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1005 20:21:55.764999       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1005 20:21:55.777497       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1005 20:21:55.788784       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1005 20:22:08.329406       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1005 20:22:08.633910       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1005 20:22:08.633920       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [c25db63db46f3674c808c787da0df23e15013744a847923cb5235115d61e2095] <==
	* I1005 20:22:08.954727       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="34.668µs"
	I1005 20:22:39.778772       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="189.842µs"
	I1005 20:22:39.792938       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="72.264µs"
	I1005 20:22:41.093573       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="137.188µs"
	I1005 20:22:41.111890       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.103447ms"
	I1005 20:22:41.112099       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="118.06µs"
	I1005 20:22:42.705521       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1005 20:22:57.781785       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-401792-m02\" does not exist"
	I1005 20:22:57.787577       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-401792-m02" podCIDRs=["10.244.1.0/24"]
	I1005 20:22:57.790735       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-xj76m"
	I1005 20:22:57.790762       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-zhd4q"
	I1005 20:23:02.708860       1 event.go:307] "Event occurred" object="multinode-401792-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-401792-m02 event: Registered Node multinode-401792-m02 in Controller"
	I1005 20:23:02.708892       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-401792-m02"
	I1005 20:23:29.429091       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-401792-m02"
	I1005 20:23:31.964323       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I1005 20:23:31.971825       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-bk8vz"
	I1005 20:23:31.977979       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-zj2tk"
	I1005 20:23:31.983554       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="19.355673ms"
	I1005 20:23:31.996117       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="12.500729ms"
	I1005 20:23:31.996222       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="56.187µs"
	I1005 20:23:32.722174       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-bk8vz" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-bk8vz"
	I1005 20:23:33.196620       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="5.498665ms"
	I1005 20:23:33.196764       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="95.254µs"
	I1005 20:23:33.397982       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="5.634104ms"
	I1005 20:23:33.398099       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="67.744µs"
	
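The range_allocator line above is the controller-manager's nodeipam controller assigning 10.244.1.0/24 to the new node; the value lands in the Node object's spec, which is also what kindnet reads before programming the route shown earlier. A hedged client-go sketch (not from the harness; kubeconfig handling simplified) that inspects it:

    // podcidr.go: hedged sketch using k8s.io/client-go.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.Background(),
            "multinode-401792-m02", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // Should match the range_allocator log line:
        // podCIDRs=["10.244.1.0/24"]
        fmt.Println(node.Spec.PodCIDR, node.Spec.PodCIDRs)
    }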
	* 
	* ==> kube-proxy [06681d05240b33fb722a1330e5677aa6bb1098879ab8f52e877a40b46875bab8] <==
	* I1005 20:22:09.525254       1 server_others.go:69] "Using iptables proxy"
	I1005 20:22:09.534898       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I1005 20:22:09.722439       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1005 20:22:09.724926       1 server_others.go:152] "Using iptables Proxier"
	I1005 20:22:09.724962       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1005 20:22:09.724968       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1005 20:22:09.724998       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1005 20:22:09.725220       1 server.go:846] "Version info" version="v1.28.2"
	I1005 20:22:09.725238       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1005 20:22:09.725809       1 config.go:97] "Starting endpoint slice config controller"
	I1005 20:22:09.725845       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1005 20:22:09.725857       1 config.go:315] "Starting node config controller"
	I1005 20:22:09.725872       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1005 20:22:09.725904       1 config.go:188] "Starting service config controller"
	I1005 20:22:09.725917       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1005 20:22:09.826348       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1005 20:22:09.826352       1 shared_informer.go:318] Caches are synced for service config
	I1005 20:22:09.826408       1 shared_informer.go:318] Caches are synced for node config
	
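The "Setting route_localnet=1" line above is what allows the harness to curl NodePorts on 127.0.0.1, and it is also why the loopback-sourced "martian" packets appear in the dmesg output earlier. A minimal sketch, assuming it runs on the node itself, that checks the sysctl:

    // check_localnet.go: minimal sketch, assuming it runs on the node.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        b, err := os.ReadFile("/proc/sys/net/ipv4/conf/all/route_localnet")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        if strings.TrimSpace(string(b)) == "1" {
            fmt.Println("NodePorts are reachable via 127.0.0.1 on this node")
        } else {
            fmt.Println("route_localnet is off; use the node IP for NodePorts")
        }
    }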
	* 
	* ==> kube-scheduler [d1f78a2e4e449de3f787762cb2be5fa649c78064f577e84c79737f1388cd7a1b] <==
	* W1005 20:21:52.941699       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1005 20:21:52.942440       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1005 20:21:52.941763       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1005 20:21:52.942467       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1005 20:21:52.941817       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1005 20:21:52.942484       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1005 20:21:52.940340       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1005 20:21:52.942501       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1005 20:21:52.942241       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1005 20:21:52.942518       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1005 20:21:52.942754       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1005 20:21:52.942812       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1005 20:21:52.942853       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1005 20:21:52.942922       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1005 20:21:52.942775       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1005 20:21:52.942985       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1005 20:21:53.845978       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1005 20:21:53.846019       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1005 20:21:53.924578       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1005 20:21:53.924617       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1005 20:21:53.958889       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1005 20:21:53.958922       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1005 20:21:54.259092       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1005 20:21:54.259136       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1005 20:21:56.636779       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
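The burst of "forbidden" list/watch failures above is ordinary scheduler start-up noise: its informers race the API server's RBAC bootstrapping and keep retrying until the final "Caches are synced" line. As a hedged sketch (not from the harness), the same kind of permission can be queried with client-go; note that SelfSubjectAccessReview checks the kubeconfig caller's identity, so probing system:kube-scheduler itself would need a SubjectAccessReview instead:

    // rbac_check.go: hedged sketch asking whether the current caller
    // may list nodes, the permission the warnings above are about.
    package main

    import (
        "context"
        "fmt"

        authv1 "k8s.io/api/authorization/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        sar := &authv1.SelfSubjectAccessReview{
            Spec: authv1.SelfSubjectAccessReviewSpec{
                ResourceAttributes: &authv1.ResourceAttributes{
                    Verb:     "list",
                    Resource: "nodes",
                },
            },
        }
        res, err := cs.AuthorizationV1().SelfSubjectAccessReviews().
            Create(context.Background(), sar, metav1.CreateOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("allowed:", res.Status.Allowed)
    }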
	* 
	* ==> kubelet <==
	* Oct 05 20:22:08 multinode-401792 kubelet[1593]: I1005 20:22:08.732193    1593 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/75b1ee02-b425-4768-aeda-451e53baaaa6-lib-modules\") pod \"kindnet-fnck9\" (UID: \"75b1ee02-b425-4768-aeda-451e53baaaa6\") " pod="kube-system/kindnet-fnck9"
	Oct 05 20:22:08 multinode-401792 kubelet[1593]: I1005 20:22:08.735248    1593 topology_manager.go:215] "Topology Admit Handler" podUID="386ee581-e207-45ad-a08c-86a0804a2233" podNamespace="kube-system" podName="kube-proxy-l9dpz"
	Oct 05 20:22:08 multinode-401792 kubelet[1593]: I1005 20:22:08.933733    1593 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/386ee581-e207-45ad-a08c-86a0804a2233-lib-modules\") pod \"kube-proxy-l9dpz\" (UID: \"386ee581-e207-45ad-a08c-86a0804a2233\") " pod="kube-system/kube-proxy-l9dpz"
	Oct 05 20:22:08 multinode-401792 kubelet[1593]: I1005 20:22:08.933818    1593 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/386ee581-e207-45ad-a08c-86a0804a2233-kube-proxy\") pod \"kube-proxy-l9dpz\" (UID: \"386ee581-e207-45ad-a08c-86a0804a2233\") " pod="kube-system/kube-proxy-l9dpz"
	Oct 05 20:22:08 multinode-401792 kubelet[1593]: I1005 20:22:08.933908    1593 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/386ee581-e207-45ad-a08c-86a0804a2233-xtables-lock\") pod \"kube-proxy-l9dpz\" (UID: \"386ee581-e207-45ad-a08c-86a0804a2233\") " pod="kube-system/kube-proxy-l9dpz"
	Oct 05 20:22:08 multinode-401792 kubelet[1593]: I1005 20:22:08.934022    1593 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnk6c\" (UniqueName: \"kubernetes.io/projected/386ee581-e207-45ad-a08c-86a0804a2233-kube-api-access-mnk6c\") pod \"kube-proxy-l9dpz\" (UID: \"386ee581-e207-45ad-a08c-86a0804a2233\") " pod="kube-system/kube-proxy-l9dpz"
	Oct 05 20:22:09 multinode-401792 kubelet[1593]: W1005 20:22:09.068031    1593 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/21605b8de5b4123278fa2f451fe30f7e931e24a24c7c9741ae865e2f0aa92a17/crio-3fc0eefe37eea2bbac628aa9d9db9cd7b6fa42299fe494908d029aff07dd3f3b WatchSource:0}: Error finding container 3fc0eefe37eea2bbac628aa9d9db9cd7b6fa42299fe494908d029aff07dd3f3b: Status 404 returned error can't find the container with id 3fc0eefe37eea2bbac628aa9d9db9cd7b6fa42299fe494908d029aff07dd3f3b
	Oct 05 20:22:09 multinode-401792 kubelet[1593]: W1005 20:22:09.379855    1593 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/21605b8de5b4123278fa2f451fe30f7e931e24a24c7c9741ae865e2f0aa92a17/crio-ebea63ab0257c50286f021dfa7fb995d2072b84502ea8a81d888b5aabe030b4c WatchSource:0}: Error finding container ebea63ab0257c50286f021dfa7fb995d2072b84502ea8a81d888b5aabe030b4c: Status 404 returned error can't find the container with id ebea63ab0257c50286f021dfa7fb995d2072b84502ea8a81d888b5aabe030b4c
	Oct 05 20:22:10 multinode-401792 kubelet[1593]: I1005 20:22:10.034801    1593 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-l9dpz" podStartSLOduration=2.03475009 podCreationTimestamp="2023-10-05 20:22:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-05 20:22:10.03461201 +0000 UTC m=+14.300599149" watchObservedRunningTime="2023-10-05 20:22:10.03475009 +0000 UTC m=+14.300737230"
	Oct 05 20:22:39 multinode-401792 kubelet[1593]: I1005 20:22:39.752549    1593 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 05 20:22:39 multinode-401792 kubelet[1593]: I1005 20:22:39.778360    1593 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-fnck9" podStartSLOduration=31.778300995 podCreationTimestamp="2023-10-05 20:22:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-05 20:22:10.045251414 +0000 UTC m=+14.311238573" watchObservedRunningTime="2023-10-05 20:22:39.778300995 +0000 UTC m=+44.044288138"
	Oct 05 20:22:39 multinode-401792 kubelet[1593]: I1005 20:22:39.778835    1593 topology_manager.go:215] "Topology Admit Handler" podUID="6db951fc-21de-44c9-9e94-cfe1ab7ac040" podNamespace="kube-system" podName="coredns-5dd5756b68-nctb6"
	Oct 05 20:22:39 multinode-401792 kubelet[1593]: I1005 20:22:39.779012    1593 topology_manager.go:215] "Topology Admit Handler" podUID="55fb2b0c-b3ba-4b56-b893-95190206e5ff" podNamespace="kube-system" podName="storage-provisioner"
	Oct 05 20:22:39 multinode-401792 kubelet[1593]: I1005 20:22:39.961629    1593 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/55fb2b0c-b3ba-4b56-b893-95190206e5ff-tmp\") pod \"storage-provisioner\" (UID: \"55fb2b0c-b3ba-4b56-b893-95190206e5ff\") " pod="kube-system/storage-provisioner"
	Oct 05 20:22:39 multinode-401792 kubelet[1593]: I1005 20:22:39.961689    1593 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsj49\" (UniqueName: \"kubernetes.io/projected/6db951fc-21de-44c9-9e94-cfe1ab7ac040-kube-api-access-zsj49\") pod \"coredns-5dd5756b68-nctb6\" (UID: \"6db951fc-21de-44c9-9e94-cfe1ab7ac040\") " pod="kube-system/coredns-5dd5756b68-nctb6"
	Oct 05 20:22:39 multinode-401792 kubelet[1593]: I1005 20:22:39.961710    1593 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwwm5\" (UniqueName: \"kubernetes.io/projected/55fb2b0c-b3ba-4b56-b893-95190206e5ff-kube-api-access-bwwm5\") pod \"storage-provisioner\" (UID: \"55fb2b0c-b3ba-4b56-b893-95190206e5ff\") " pod="kube-system/storage-provisioner"
	Oct 05 20:22:39 multinode-401792 kubelet[1593]: I1005 20:22:39.961736    1593 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6db951fc-21de-44c9-9e94-cfe1ab7ac040-config-volume\") pod \"coredns-5dd5756b68-nctb6\" (UID: \"6db951fc-21de-44c9-9e94-cfe1ab7ac040\") " pod="kube-system/coredns-5dd5756b68-nctb6"
	Oct 05 20:22:40 multinode-401792 kubelet[1593]: W1005 20:22:40.116247    1593 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/21605b8de5b4123278fa2f451fe30f7e931e24a24c7c9741ae865e2f0aa92a17/crio-9bcef1e5fdb7f041e756c515ed4d66637817f3b4051a32eea7c79fe4de5fef2d WatchSource:0}: Error finding container 9bcef1e5fdb7f041e756c515ed4d66637817f3b4051a32eea7c79fe4de5fef2d: Status 404 returned error can't find the container with id 9bcef1e5fdb7f041e756c515ed4d66637817f3b4051a32eea7c79fe4de5fef2d
	Oct 05 20:22:40 multinode-401792 kubelet[1593]: W1005 20:22:40.117333    1593 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/21605b8de5b4123278fa2f451fe30f7e931e24a24c7c9741ae865e2f0aa92a17/crio-5f3868a0c8977f4801e05ddcd66ea5236fb9f9b83caac047f63acdafdf4fbe6a WatchSource:0}: Error finding container 5f3868a0c8977f4801e05ddcd66ea5236fb9f9b83caac047f63acdafdf4fbe6a: Status 404 returned error can't find the container with id 5f3868a0c8977f4801e05ddcd66ea5236fb9f9b83caac047f63acdafdf4fbe6a
	Oct 05 20:22:41 multinode-401792 kubelet[1593]: I1005 20:22:41.093286    1593 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-nctb6" podStartSLOduration=33.093232349 podCreationTimestamp="2023-10-05 20:22:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-05 20:22:41.092188392 +0000 UTC m=+45.358175532" watchObservedRunningTime="2023-10-05 20:22:41.093232349 +0000 UTC m=+45.359219489"
	Oct 05 20:22:41 multinode-401792 kubelet[1593]: I1005 20:22:41.115010    1593 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=32.114966919 podCreationTimestamp="2023-10-05 20:22:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-05 20:22:41.114646779 +0000 UTC m=+45.380633920" watchObservedRunningTime="2023-10-05 20:22:41.114966919 +0000 UTC m=+45.380954058"
	Oct 05 20:23:31 multinode-401792 kubelet[1593]: I1005 20:23:31.984633    1593 topology_manager.go:215] "Topology Admit Handler" podUID="e0c273f9-2a0f-40ef-acaa-677f0f07d19c" podNamespace="default" podName="busybox-5bc68d56bd-zj2tk"
	Oct 05 20:23:32 multinode-401792 kubelet[1593]: I1005 20:23:32.145123    1593 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsqjh\" (UniqueName: \"kubernetes.io/projected/e0c273f9-2a0f-40ef-acaa-677f0f07d19c-kube-api-access-dsqjh\") pod \"busybox-5bc68d56bd-zj2tk\" (UID: \"e0c273f9-2a0f-40ef-acaa-677f0f07d19c\") " pod="default/busybox-5bc68d56bd-zj2tk"
	Oct 05 20:23:32 multinode-401792 kubelet[1593]: W1005 20:23:32.348028    1593 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/21605b8de5b4123278fa2f451fe30f7e931e24a24c7c9741ae865e2f0aa92a17/crio-3d4d5e7a3d4c09129f19f3ff465edc37eab41bf45d15ffaf7b429e8a668bc4f8 WatchSource:0}: Error finding container 3d4d5e7a3d4c09129f19f3ff465edc37eab41bf45d15ffaf7b429e8a668bc4f8: Status 404 returned error can't find the container with id 3d4d5e7a3d4c09129f19f3ff465edc37eab41bf45d15ffaf7b429e8a668bc4f8
	Oct 05 20:23:33 multinode-401792 kubelet[1593]: I1005 20:23:33.191189    1593 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-5bc68d56bd-zj2tk" podStartSLOduration=1.593193881 podCreationTimestamp="2023-10-05 20:23:31 +0000 UTC" firstStartedPulling="2023-10-05 20:23:32.352830792 +0000 UTC m=+96.618817923" lastFinishedPulling="2023-10-05 20:23:32.950764785 +0000 UTC m=+97.216751907" observedRunningTime="2023-10-05 20:23:33.191009039 +0000 UTC m=+97.456996181" watchObservedRunningTime="2023-10-05 20:23:33.191127865 +0000 UTC m=+97.457115003"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-401792 -n multinode-401792
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-401792 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.66s)

TestRunningBinaryUpgrade (78.98s)
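For orientation before the transcript: this test starts a cluster with an old release binary (minikube v1.9.0 here) and then re-runs start on the same profile with the binary under test; the failure below comes from that second start exiting non-zero. A hedged sketch of the flow (illustrative paths and flags copied from the log, not the harness code):

    // upgrade_flow.go: hedged sketch of what TestRunningBinaryUpgrade
    // exercises; binary paths and the profile name are illustrative.
    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func run(bin string, args ...string) error {
        cmd := exec.Command(bin, args...)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }

    func main() {
        profile := "running-upgrade-591577" // profile name from the log
        // The old release creates the cluster...
        if err := run("/tmp/minikube-v1.9.0", "start", "-p", profile,
            "--memory=2200", "--vm-driver=docker", "--container-runtime=crio"); err != nil {
            log.Fatal(err)
        }
        // ...and the binary under test must take it over in place. The
        // non-zero exit captured below (status 90) came from this
        // second start.
        if err := run("out/minikube-linux-amd64", "start", "-p", profile,
            "--memory=2200", "--driver=docker", "--container-runtime=crio"); err != nil {
            log.Fatal(err)
        }
    }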

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.9.0.2442405766.exe start -p running-upgrade-591577 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.9.0.2442405766.exe start -p running-upgrade-591577 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m14.057568214s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-591577 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-591577 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (2.260304653s)

-- stdout --
	* [running-upgrade-591577] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17363
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17363-334135/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-334135/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-591577 in cluster running-upgrade-591577
	* Pulling base image ...
	* Updating the running docker "running-upgrade-591577" container ...
	
	

-- /stdout --
** stderr ** 
	I1005 20:35:40.103344  518600 out.go:296] Setting OutFile to fd 1 ...
	I1005 20:35:40.103627  518600 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 20:35:40.103637  518600 out.go:309] Setting ErrFile to fd 2...
	I1005 20:35:40.103642  518600 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 20:35:40.103876  518600 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-334135/.minikube/bin
	I1005 20:35:40.104437  518600 out.go:303] Setting JSON to false
	I1005 20:35:40.105951  518600 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8269,"bootTime":1696529871,"procs":795,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1005 20:35:40.106012  518600 start.go:138] virtualization: kvm guest
	I1005 20:35:40.108208  518600 out.go:177] * [running-upgrade-591577] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1005 20:35:40.109578  518600 out.go:177]   - MINIKUBE_LOCATION=17363
	I1005 20:35:40.110842  518600 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1005 20:35:40.109713  518600 notify.go:220] Checking for updates...
	I1005 20:35:40.113232  518600 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17363-334135/kubeconfig
	I1005 20:35:40.114576  518600 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-334135/.minikube
	I1005 20:35:40.115837  518600 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1005 20:35:40.117079  518600 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1005 20:35:40.118641  518600 config.go:182] Loaded profile config "running-upgrade-591577": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1005 20:35:40.118663  518600 start_flags.go:698] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae
	I1005 20:35:40.120437  518600 out.go:177] * Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	I1005 20:35:40.121903  518600 driver.go:378] Setting default libvirt URI to qemu:///system
	I1005 20:35:40.150843  518600 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1005 20:35:40.150937  518600 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 20:35:40.212487  518600 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:true NGoroutines:72 SystemTime:2023-10-05 20:35:40.20162562 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1005 20:35:40.212597  518600 docker.go:294] overlay module found
	I1005 20:35:40.214359  518600 out.go:177] * Using the docker driver based on existing profile
	I1005 20:35:40.215637  518600 start.go:298] selected driver: docker
	I1005 20:35:40.215657  518600 start.go:902] validating driver "docker" against &{Name:running-upgrade-591577 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-591577 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I1005 20:35:40.215772  518600 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1005 20:35:40.216618  518600 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 20:35:40.274950  518600 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:true NGoroutines:72 SystemTime:2023-10-05 20:35:40.266068849 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1005 20:35:40.275335  518600 cni.go:84] Creating CNI manager for ""
	I1005 20:35:40.275371  518600 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I1005 20:35:40.275384  518600 start_flags.go:321] config:
	{Name:running-upgrade-591577 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-591577 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I1005 20:35:40.278084  518600 out.go:177] * Starting control plane node running-upgrade-591577 in cluster running-upgrade-591577
	I1005 20:35:40.279246  518600 cache.go:122] Beginning downloading kic base image for docker with crio
	I1005 20:35:40.280564  518600 out.go:177] * Pulling base image ...
	I1005 20:35:40.281883  518600 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I1005 20:35:40.281973  518600 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon
	I1005 20:35:40.298362  518600 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon, skipping pull
	I1005 20:35:40.298385  518600 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae exists in daemon, skipping load
	W1005 20:35:40.312528  518600 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1005 20:35:40.312655  518600 profile.go:148] Saving config to /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/running-upgrade-591577/config.json ...
	I1005 20:35:40.312830  518600 cache.go:107] acquiring lock: {Name:mk2119f3f7cd88f2a80c80cd2a38098de35a95a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 20:35:40.312871  518600 cache.go:107] acquiring lock: {Name:mk9e4d863e4cff0098cccb5d89ee3b312d8ea8d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 20:35:40.312883  518600 cache.go:107] acquiring lock: {Name:mk84dfc6392d95f4289c0356633119169d1870e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 20:35:40.312887  518600 cache.go:107] acquiring lock: {Name:mk72208b8108bf0961f44c766d0b43524faf7eec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 20:35:40.312828  518600 cache.go:107] acquiring lock: {Name:mkd5f349852f6a130d7eaffc0f3893ec2d673f49 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 20:35:40.312865  518600 cache.go:195] Successfully downloaded all kic artifacts
	I1005 20:35:40.312897  518600 cache.go:107] acquiring lock: {Name:mk106368a05ecc723c277e629968ea58b833b64a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 20:35:40.312932  518600 cache.go:115] /home/jenkins/minikube-integration/17363-334135/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 exists
	I1005 20:35:40.312936  518600 cache.go:115] /home/jenkins/minikube-integration/17363-334135/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1005 20:35:40.312946  518600 start.go:365] acquiring machines lock for running-upgrade-591577: {Name:mk2b99e52a1cf8b0f26ba3f9e5ec27709fe1a6d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 20:35:40.312946  518600 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17363-334135/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 76.914µs
	I1005 20:35:40.312943  518600 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/17363-334135/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0" took 123.787µs
	I1005 20:35:40.312957  518600 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17363-334135/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1005 20:35:40.312959  518600 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/17363-334135/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 succeeded
	I1005 20:35:40.312963  518600 cache.go:115] /home/jenkins/minikube-integration/17363-334135/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 exists
	I1005 20:35:40.312975  518600 cache.go:115] /home/jenkins/minikube-integration/17363-334135/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 exists
	I1005 20:35:40.312973  518600 cache.go:107] acquiring lock: {Name:mkc719a28697e9be0d559521f511fc804ee5101e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 20:35:40.312987  518600 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17363-334135/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2" took 120.292µs
	I1005 20:35:40.312845  518600 cache.go:107] acquiring lock: {Name:mked23397909bf68513b62ec994bb014e2c731aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 20:35:40.313009  518600 cache.go:115] /home/jenkins/minikube-integration/17363-334135/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 exists
	I1005 20:35:40.313011  518600 cache.go:115] /home/jenkins/minikube-integration/17363-334135/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1005 20:35:40.313012  518600 start.go:369] acquired machines lock for "running-upgrade-591577" in 54.882µs
	I1005 20:35:40.313014  518600 cache.go:115] /home/jenkins/minikube-integration/17363-334135/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 exists
	I1005 20:35:40.313015  518600 cache.go:96] cache image "registry.k8s.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/17363-334135/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7" took 121.723µs
	I1005 20:35:40.313018  518600 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17363-334135/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 199.845µs
	I1005 20:35:40.313026  518600 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/17363-334135/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 succeeded
	I1005 20:35:40.313027  518600 start.go:96] Skipping create...Using existing machine configuration
	I1005 20:35:40.313029  518600 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17363-334135/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1005 20:35:40.313031  518600 cache.go:115] /home/jenkins/minikube-integration/17363-334135/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 exists
	I1005 20:35:40.312973  518600 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/17363-334135/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0" took 93.424µs
	I1005 20:35:40.313040  518600 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/17363-334135/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 succeeded
	I1005 20:35:40.313042  518600 fix.go:54] fixHost starting: m01
	I1005 20:35:40.313000  518600 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17363-334135/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 succeeded
	I1005 20:35:40.313024  518600 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/17363-334135/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0" took 52.711µs
	I1005 20:35:40.313052  518600 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/17363-334135/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 succeeded
	I1005 20:35:40.313039  518600 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/17363-334135/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0" took 202.74µs
	I1005 20:35:40.313060  518600 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/17363-334135/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 succeeded
	I1005 20:35:40.313067  518600 cache.go:87] Successfully saved all images to host disk.
	I1005 20:35:40.313332  518600 cli_runner.go:164] Run: docker container inspect running-upgrade-591577 --format={{.State.Status}}
	I1005 20:35:40.330063  518600 fix.go:102] recreateIfNeeded on running-upgrade-591577: state=Running err=<nil>
	W1005 20:35:40.330084  518600 fix.go:128] unexpected machine state, will restart: <nil>
	I1005 20:35:40.331938  518600 out.go:177] * Updating the running docker "running-upgrade-591577" container ...
	I1005 20:35:40.333120  518600 machine.go:88] provisioning docker machine ...
	I1005 20:35:40.333143  518600 ubuntu.go:169] provisioning hostname "running-upgrade-591577"
	I1005 20:35:40.333192  518600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-591577
	I1005 20:35:40.356191  518600 main.go:141] libmachine: Using SSH client type: native
	I1005 20:35:40.356680  518600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 127.0.0.1 33251 <nil> <nil>}
	I1005 20:35:40.356703  518600 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-591577 && echo "running-upgrade-591577" | sudo tee /etc/hostname
	I1005 20:35:40.470781  518600 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-591577
	
	I1005 20:35:40.470855  518600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-591577
	I1005 20:35:40.488271  518600 main.go:141] libmachine: Using SSH client type: native
	I1005 20:35:40.488588  518600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 127.0.0.1 33251 <nil> <nil>}
	I1005 20:35:40.488607  518600 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-591577' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-591577/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-591577' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1005 20:35:40.594912  518600 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1005 20:35:40.594940  518600 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17363-334135/.minikube CaCertPath:/home/jenkins/minikube-integration/17363-334135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17363-334135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17363-334135/.minikube}
	I1005 20:35:40.594963  518600 ubuntu.go:177] setting up certificates
	I1005 20:35:40.594975  518600 provision.go:83] configureAuth start
	I1005 20:35:40.595033  518600 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-591577
	I1005 20:35:40.610859  518600 provision.go:138] copyHostCerts
	I1005 20:35:40.610923  518600 exec_runner.go:144] found /home/jenkins/minikube-integration/17363-334135/.minikube/key.pem, removing ...
	I1005 20:35:40.610940  518600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17363-334135/.minikube/key.pem
	I1005 20:35:40.611004  518600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-334135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17363-334135/.minikube/key.pem (1675 bytes)
	I1005 20:35:40.611139  518600 exec_runner.go:144] found /home/jenkins/minikube-integration/17363-334135/.minikube/ca.pem, removing ...
	I1005 20:35:40.611151  518600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17363-334135/.minikube/ca.pem
	I1005 20:35:40.611190  518600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-334135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17363-334135/.minikube/ca.pem (1078 bytes)
	I1005 20:35:40.611280  518600 exec_runner.go:144] found /home/jenkins/minikube-integration/17363-334135/.minikube/cert.pem, removing ...
	I1005 20:35:40.611295  518600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17363-334135/.minikube/cert.pem
	I1005 20:35:40.611333  518600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-334135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17363-334135/.minikube/cert.pem (1123 bytes)
	I1005 20:35:40.611399  518600 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17363-334135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17363-334135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17363-334135/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-591577 san=[172.17.0.2 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-591577]
	I1005 20:35:40.789058  518600 provision.go:172] copyRemoteCerts
	I1005 20:35:40.789119  518600 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1005 20:35:40.789175  518600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-591577
	I1005 20:35:40.812595  518600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33251 SSHKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/running-upgrade-591577/id_rsa Username:docker}
	I1005 20:35:40.895347  518600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-334135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1005 20:35:40.914056  518600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-334135/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1005 20:35:40.934667  518600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-334135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1005 20:35:40.958851  518600 provision.go:86] duration metric: configureAuth took 363.858065ms
	I1005 20:35:40.958888  518600 ubuntu.go:193] setting minikube options for container-runtime
	I1005 20:35:40.959139  518600 config.go:182] Loaded profile config "running-upgrade-591577": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1005 20:35:40.959275  518600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-591577
	I1005 20:35:40.982703  518600 main.go:141] libmachine: Using SSH client type: native
	I1005 20:35:40.983016  518600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 127.0.0.1 33251 <nil> <nil>}
	I1005 20:35:40.983033  518600 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1005 20:35:41.433108  518600 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1005 20:35:41.433139  518600 machine.go:91] provisioned docker machine in 1.100004338s
	I1005 20:35:41.433153  518600 start.go:300] post-start starting for "running-upgrade-591577" (driver="docker")
	I1005 20:35:41.433167  518600 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1005 20:35:41.433232  518600 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1005 20:35:41.433282  518600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-591577
	I1005 20:35:41.455069  518600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33251 SSHKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/running-upgrade-591577/id_rsa Username:docker}
	I1005 20:35:41.540454  518600 ssh_runner.go:195] Run: cat /etc/os-release
	I1005 20:35:41.543724  518600 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1005 20:35:41.543749  518600 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1005 20:35:41.543762  518600 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1005 20:35:41.543770  518600 info.go:137] Remote host: Ubuntu 19.10
	I1005 20:35:41.543782  518600 filesync.go:126] Scanning /home/jenkins/minikube-integration/17363-334135/.minikube/addons for local assets ...
	I1005 20:35:41.543835  518600 filesync.go:126] Scanning /home/jenkins/minikube-integration/17363-334135/.minikube/files for local assets ...
	I1005 20:35:41.543916  518600 filesync.go:149] local asset: /home/jenkins/minikube-integration/17363-334135/.minikube/files/etc/ssl/certs/3409292.pem -> 3409292.pem in /etc/ssl/certs
	I1005 20:35:41.544021  518600 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1005 20:35:41.554579  518600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-334135/.minikube/files/etc/ssl/certs/3409292.pem --> /etc/ssl/certs/3409292.pem (1708 bytes)
	I1005 20:35:41.578917  518600 start.go:303] post-start completed in 145.746757ms
	I1005 20:35:41.579007  518600 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1005 20:35:41.579111  518600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-591577
	I1005 20:35:41.597086  518600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33251 SSHKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/running-upgrade-591577/id_rsa Username:docker}
	I1005 20:35:41.679433  518600 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1005 20:35:41.683314  518600 fix.go:56] fixHost completed within 1.37027298s
	I1005 20:35:41.683336  518600 start.go:83] releasing machines lock for "running-upgrade-591577", held for 1.370314176s
	I1005 20:35:41.683407  518600 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-591577
	I1005 20:35:41.700307  518600 ssh_runner.go:195] Run: cat /version.json
	I1005 20:35:41.700357  518600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-591577
	I1005 20:35:41.700379  518600 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1005 20:35:41.700433  518600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-591577
	I1005 20:35:41.717253  518600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33251 SSHKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/running-upgrade-591577/id_rsa Username:docker}
	I1005 20:35:41.717631  518600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33251 SSHKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/running-upgrade-591577/id_rsa Username:docker}
	W1005 20:35:41.797649  518600 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1005 20:35:41.797721  518600 ssh_runner.go:195] Run: systemctl --version
	I1005 20:35:41.829205  518600 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1005 20:35:41.885967  518600 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1005 20:35:41.890053  518600 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1005 20:35:41.904498  518600 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1005 20:35:41.904561  518600 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1005 20:35:41.925911  518600 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1005 20:35:41.925930  518600 start.go:469] detecting cgroup driver to use...
	I1005 20:35:41.925962  518600 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1005 20:35:41.926006  518600 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1005 20:35:41.951800  518600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1005 20:35:41.961210  518600 docker.go:197] disabling cri-docker service (if available) ...
	I1005 20:35:41.961253  518600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1005 20:35:41.970030  518600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1005 20:35:41.980131  518600 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1005 20:35:41.988946  518600 docker.go:207] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1005 20:35:41.988980  518600 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1005 20:35:42.083631  518600 docker.go:213] disabling docker service ...
	I1005 20:35:42.083700  518600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1005 20:35:42.094250  518600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1005 20:35:42.103269  518600 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1005 20:35:42.187588  518600 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1005 20:35:42.279929  518600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1005 20:35:42.289796  518600 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1005 20:35:42.302194  518600 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1005 20:35:42.302245  518600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1005 20:35:42.311239  518600 out.go:177] 
	W1005 20:35:42.312522  518600 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1005 20:35:42.312541  518600 out.go:239] * 
	W1005 20:35:42.313412  518600 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1005 20:35:42.314820  518600 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:145: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-591577 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-10-05 20:35:42.335491716 +0000 UTC m=+1968.237174433
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-591577
helpers_test.go:235: (dbg) docker inspect running-upgrade-591577:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "64ae3a4b5d354e9b891c8aa76c7a66f3cd0cdc449c4536aa36a9a5b4e685c544",
	        "Created": "2023-10-05T20:34:26.33165043Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 500296,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-05T20:34:26.764356026Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/64ae3a4b5d354e9b891c8aa76c7a66f3cd0cdc449c4536aa36a9a5b4e685c544/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/64ae3a4b5d354e9b891c8aa76c7a66f3cd0cdc449c4536aa36a9a5b4e685c544/hostname",
	        "HostsPath": "/var/lib/docker/containers/64ae3a4b5d354e9b891c8aa76c7a66f3cd0cdc449c4536aa36a9a5b4e685c544/hosts",
	        "LogPath": "/var/lib/docker/containers/64ae3a4b5d354e9b891c8aa76c7a66f3cd0cdc449c4536aa36a9a5b4e685c544/64ae3a4b5d354e9b891c8aa76c7a66f3cd0cdc449c4536aa36a9a5b4e685c544-json.log",
	        "Name": "/running-upgrade-591577",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "running-upgrade-591577:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/049ea0b2620a71a5eec941c71df4634e935598429da8f82cd48e7f5eb8f6067a-init/diff:/var/lib/docker/overlay2/9815efadfdc26a4bcf8e7f20ac341496a9a225ee6a5c408a44e15f504b124233/diff:/var/lib/docker/overlay2/9ab21677da314d27b731419f8682a0b0de4f8b3bcbb80a90fd5ed330b93e61b1/diff:/var/lib/docker/overlay2/bb81cc9f3ef18a7454300f2dfc72d160897f6c06ac24b1c31c414d79d0be14b9/diff:/var/lib/docker/overlay2/cc17efbd0fbcbc7a341dc889c33bc30d26dde40f806975dc04803cf78e55ddc9/diff:/var/lib/docker/overlay2/e3ecc476fe161ae9c62ca695fdf6c15cb21ee7dc37703b606113c1af1653dc89/diff:/var/lib/docker/overlay2/6057900343e873846c501d4cb70ea9255052dea99b8b33e3c147e8a3c55d02d0/diff:/var/lib/docker/overlay2/d1bbd0b6361747ae0267fa6e8e9145adda9c5c3ac435ad9adb762ddda9a96808/diff:/var/lib/docker/overlay2/6b028f0360363138f236dd3ecb58d0e92708a88609bc2ddbd0de4a114e3197dd/diff:/var/lib/docker/overlay2/d09ded89b11a8018356d56fee5990bef2faa19f451d3428853abaea2855ee9a3/diff:/var/lib/docker/overlay2/4c96b3
cf8536e207a16be8f7f991493b969280a1414fbe18c4922d3e68bf7d3e/diff:/var/lib/docker/overlay2/f083e36b3637a9b931b9fd5649a6a6f560bf79ef5bd7055072a1aa0cab61233a/diff:/var/lib/docker/overlay2/a99909f7f2cc57dbe5c69448eb213c20206a4106b0d70d07e806fef0bcd0d370/diff:/var/lib/docker/overlay2/506f65ffbd1b8bb8e31729314777f6d17b8a5d597eeaa1cebcb12f4d2fd706b3/diff:/var/lib/docker/overlay2/32fddf78596ffcfd937879ccd53f736ff40dd9116949b6e99e0dba334a023176/diff:/var/lib/docker/overlay2/a53add1509e4ccc8a16bad153c4e933c1f46266d3454a23e3bc07922fd4e38ef/diff:/var/lib/docker/overlay2/f851f9060ee1c1b0982aa359c61667927b6f92cc99ec1c58501d6e307eab7c4e/diff:/var/lib/docker/overlay2/1b54cea72a54bf94e4f2240f7047da4a3e1a03815afe0c90a25bf260827de60f/diff:/var/lib/docker/overlay2/06319410b7055ff7013e6a4f6a19e094f88fe522edb3cabc9bb11fd6626208de/diff:/var/lib/docker/overlay2/509c8231a4e1fa0f75694eb7f50f67b27ffceec3c22145922b7567d678ca58f0/diff:/var/lib/docker/overlay2/fb3d43f75ffecdc3043fc8ba8d937336324516017666ac56c9b6c03169436430/diff:/var/lib/d
ocker/overlay2/02f484515dd73848349a69c66142fb427a3e97ec8c0a2021aae1605743f48703/diff",
	                "MergedDir": "/var/lib/docker/overlay2/049ea0b2620a71a5eec941c71df4634e935598429da8f82cd48e7f5eb8f6067a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/049ea0b2620a71a5eec941c71df4634e935598429da8f82cd48e7f5eb8f6067a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/049ea0b2620a71a5eec941c71df4634e935598429da8f82cd48e7f5eb8f6067a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-591577",
	                "Source": "/var/lib/docker/volumes/running-upgrade-591577/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-591577",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-591577",
	                "name.minikube.sigs.k8s.io": "running-upgrade-591577",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dc3c40b33ee205eb0ccbf3772f1cb4fccab97438d5f139b66db5f3b8906fde25",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33251"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33250"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33249"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/dc3c40b33ee2",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "c0051643626ddc875d0923ea2363b45d1d9ad90904ed00ca054c523e49e63c73",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "7599251d1d1f734467eff56afe30265a9e7428bc31a49b3410dc823a4dcba52f",
	                    "EndpointID": "c0051643626ddc875d0923ea2363b45d1d9ad90904ed00ca054c523e49e63c73",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-591577 -n running-upgrade-591577
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-591577 -n running-upgrade-591577: exit status 4 (301.458376ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1005 20:35:42.626393  519741 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-591577" does not appear in /home/jenkins/minikube-integration/17363-334135/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-591577" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-591577" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-591577
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-591577: (1.855013863s)
--- FAIL: TestRunningBinaryUpgrade (78.98s)
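Note on the failure: the upgraded binary inherited a container built by minikube v1.9.0 from kicbase v0.0.8 (Ubuntu 19.10, per the docker inspect output above), which predates the /etc/crio/crio.conf.d/02-crio.conf drop-in, so the unconditional sed exits with status 2 and start aborts with RUNTIME_ENABLE. A minimal defensive sketch, hypothetical rather than minikube's actual fix, that would tolerate the missing drop-in:

	# Hypothetical: create the CRI-O drop-in when an old base image lacks it,
	# then rewrite pause_image exactly as the failing command above does.
	sudo mkdir -p /etc/crio/crio.conf.d
	[ -f /etc/crio/crio.conf.d/02-crio.conf ] || \
	  printf '[crio.image]\npause_image = ""\n' | sudo tee /etc/crio/crio.conf.d/02-crio.conf >/dev/null
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf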

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (83.86s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.9.0.55849901.exe start -p stopped-upgrade-969365 --memory=2200 --vm-driver=docker  --container-runtime=crio
E1005 20:33:05.449922  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/functional-368978/client.crt: no such file or directory
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.9.0.55849901.exe start -p stopped-upgrade-969365 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m16.639763282s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.9.0.55849901.exe -p stopped-upgrade-969365 stop
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-969365 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-969365 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (6.24510305s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-969365] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17363
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17363-334135/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-334135/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-969365 in cluster stopped-upgrade-969365
	* Pulling base image ...
	* Restarting existing docker container for "stopped-upgrade-969365" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1005 20:34:13.359385  494911 out.go:296] Setting OutFile to fd 1 ...
	I1005 20:34:13.359577  494911 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 20:34:13.359604  494911 out.go:309] Setting ErrFile to fd 2...
	I1005 20:34:13.359613  494911 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 20:34:13.360017  494911 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-334135/.minikube/bin
	I1005 20:34:13.360976  494911 out.go:303] Setting JSON to false
	I1005 20:34:13.362127  494911 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8183,"bootTime":1696529871,"procs":378,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1005 20:34:13.362200  494911 start.go:138] virtualization: kvm guest
	I1005 20:34:13.364111  494911 out.go:177] * [stopped-upgrade-969365] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1005 20:34:13.365937  494911 out.go:177]   - MINIKUBE_LOCATION=17363
	I1005 20:34:13.365969  494911 notify.go:220] Checking for updates...
	I1005 20:34:13.367257  494911 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1005 20:34:13.368595  494911 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17363-334135/kubeconfig
	I1005 20:34:13.369966  494911 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-334135/.minikube
	I1005 20:34:13.371236  494911 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1005 20:34:13.372585  494911 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1005 20:34:13.374186  494911 config.go:182] Loaded profile config "stopped-upgrade-969365": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1005 20:34:13.374224  494911 start_flags.go:698] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae
	I1005 20:34:13.375858  494911 out.go:177] * Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	I1005 20:34:13.377139  494911 driver.go:378] Setting default libvirt URI to qemu:///system
	I1005 20:34:13.398898  494911 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1005 20:34:13.398971  494911 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 20:34:13.462481  494911 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:91 OomKillDisable:true NGoroutines:93 SystemTime:2023-10-05 20:34:13.452641362 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1005 20:34:13.462624  494911 docker.go:294] overlay module found
	I1005 20:34:13.465368  494911 out.go:177] * Using the docker driver based on existing profile
	I1005 20:34:13.466765  494911 start.go:298] selected driver: docker
	I1005 20:34:13.466791  494911 start.go:902] validating driver "docker" against &{Name:stopped-upgrade-969365 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-969365 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I1005 20:34:13.466894  494911 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1005 20:34:13.467799  494911 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 20:34:13.524334  494911 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:91 OomKillDisable:true NGoroutines:93 SystemTime:2023-10-05 20:34:13.514413237 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1005 20:34:13.524644  494911 cni.go:84] Creating CNI manager for ""
	I1005 20:34:13.524672  494911 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I1005 20:34:13.524688  494911 start_flags.go:321] config:
	{Name:stopped-upgrade-969365 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-969365 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I1005 20:34:13.526985  494911 out.go:177] * Starting control plane node stopped-upgrade-969365 in cluster stopped-upgrade-969365
	I1005 20:34:13.528247  494911 cache.go:122] Beginning downloading kic base image for docker with crio
	I1005 20:34:13.529570  494911 out.go:177] * Pulling base image ...
	I1005 20:34:13.530849  494911 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I1005 20:34:13.530987  494911 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon
	I1005 20:34:13.548804  494911 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon, skipping pull
	I1005 20:34:13.548827  494911 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae exists in daemon, skipping load
	W1005 20:34:13.562835  494911 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1005 20:34:13.563037  494911 profile.go:148] Saving config to /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/stopped-upgrade-969365/config.json ...
	I1005 20:34:13.563036  494911 cache.go:107] acquiring lock: {Name:mkc719a28697e9be0d559521f511fc804ee5101e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 20:34:13.563030  494911 cache.go:107] acquiring lock: {Name:mkd5f349852f6a130d7eaffc0f3893ec2d673f49 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 20:34:13.563050  494911 cache.go:107] acquiring lock: {Name:mk2119f3f7cd88f2a80c80cd2a38098de35a95a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 20:34:13.563253  494911 cache.go:115] /home/jenkins/minikube-integration/17363-334135/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1005 20:34:13.563266  494911 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17363-334135/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 246.297µs
	I1005 20:34:13.563251  494911 cache.go:107] acquiring lock: {Name:mk9e4d863e4cff0098cccb5d89ee3b312d8ea8d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 20:34:13.563289  494911 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17363-334135/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1005 20:34:13.563252  494911 cache.go:107] acquiring lock: {Name:mk84dfc6392d95f4289c0356633119169d1870e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 20:34:13.563316  494911 cache.go:195] Successfully downloaded all kic artifacts
	I1005 20:34:13.563301  494911 cache.go:107] acquiring lock: {Name:mked23397909bf68513b62ec994bb014e2c731aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 20:34:13.563308  494911 cache.go:107] acquiring lock: {Name:mk106368a05ecc723c277e629968ea58b833b64a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 20:34:13.563254  494911 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.0
	I1005 20:34:13.563339  494911 cache.go:107] acquiring lock: {Name:mk72208b8108bf0961f44c766d0b43524faf7eec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 20:34:13.563440  494911 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1005 20:34:13.563448  494911 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.0
	I1005 20:34:13.563346  494911 start.go:365] acquiring machines lock for stopped-upgrade-969365: {Name:mk4c008e231cd96d2b4be92bc8c56955b0e65810 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 20:34:13.563519  494911 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1005 20:34:13.563528  494911 start.go:369] acquired machines lock for "stopped-upgrade-969365" in 53.179µs
	I1005 20:34:13.563548  494911 start.go:96] Skipping create...Using existing machine configuration
	I1005 20:34:13.563561  494911 fix.go:54] fixHost starting: m01
	I1005 20:34:13.563254  494911 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.0
	I1005 20:34:13.563607  494911 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1005 20:34:13.563625  494911 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.0
	I1005 20:34:13.563844  494911 cli_runner.go:164] Run: docker container inspect stopped-upgrade-969365 --format={{.State.Status}}
	I1005 20:34:13.564267  494911 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1005 20:34:13.564454  494911 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.0
	I1005 20:34:13.565096  494911 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1005 20:34:13.565235  494911 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.0
	I1005 20:34:13.565651  494911 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.0
	I1005 20:34:13.565669  494911 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.0
	I1005 20:34:13.565880  494911 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1005 20:34:13.585168  494911 fix.go:102] recreateIfNeeded on stopped-upgrade-969365: state=Stopped err=<nil>
	W1005 20:34:13.585202  494911 fix.go:128] unexpected machine state, will restart: <nil>
	I1005 20:34:13.588359  494911 out.go:177] * Restarting existing docker container for "stopped-upgrade-969365" ...
	I1005 20:34:13.589592  494911 cli_runner.go:164] Run: docker start stopped-upgrade-969365
	I1005 20:34:13.745005  494911 cache.go:162] opening:  /home/jenkins/minikube-integration/17363-334135/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0
	I1005 20:34:13.769676  494911 cache.go:162] opening:  /home/jenkins/minikube-integration/17363-334135/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0
	I1005 20:34:13.791202  494911 cache.go:162] opening:  /home/jenkins/minikube-integration/17363-334135/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1005 20:34:13.795732  494911 cache.go:162] opening:  /home/jenkins/minikube-integration/17363-334135/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0
	I1005 20:34:13.806690  494911 cache.go:162] opening:  /home/jenkins/minikube-integration/17363-334135/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0
	I1005 20:34:13.823747  494911 cache.go:162] opening:  /home/jenkins/minikube-integration/17363-334135/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1005 20:34:13.858678  494911 cache.go:162] opening:  /home/jenkins/minikube-integration/17363-334135/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I1005 20:34:13.884901  494911 cache.go:157] /home/jenkins/minikube-integration/17363-334135/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 exists
	I1005 20:34:13.884945  494911 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17363-334135/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2" took 321.606271ms
	I1005 20:34:13.884959  494911 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17363-334135/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 succeeded
	I1005 20:34:13.905257  494911 cli_runner.go:164] Run: docker container inspect stopped-upgrade-969365 --format={{.State.Status}}
	I1005 20:34:13.930346  494911 kic.go:426] container "stopped-upgrade-969365" state is running.
	I1005 20:34:13.932935  494911 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-969365
	I1005 20:34:13.953399  494911 profile.go:148] Saving config to /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/stopped-upgrade-969365/config.json ...
	I1005 20:34:13.953656  494911 machine.go:88] provisioning docker machine ...
	I1005 20:34:13.953681  494911 ubuntu.go:169] provisioning hostname "stopped-upgrade-969365"
	I1005 20:34:13.953743  494911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-969365
	I1005 20:34:13.980417  494911 main.go:141] libmachine: Using SSH client type: native
	I1005 20:34:13.980964  494911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 127.0.0.1 33248 <nil> <nil>}
	I1005 20:34:13.980984  494911 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-969365 && echo "stopped-upgrade-969365" | sudo tee /etc/hostname
	I1005 20:34:13.981974  494911 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55898->127.0.0.1:33248: read: connection reset by peer
	I1005 20:34:14.300576  494911 cache.go:157] /home/jenkins/minikube-integration/17363-334135/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 exists
	I1005 20:34:14.300614  494911 cache.go:96] cache image "registry.k8s.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/17363-334135/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7" took 737.301622ms
	I1005 20:34:14.300631  494911 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/17363-334135/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 succeeded
	I1005 20:34:14.772349  494911 cache.go:157] /home/jenkins/minikube-integration/17363-334135/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 exists
	I1005 20:34:14.772383  494911 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/17363-334135/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0" took 1.209169246s
	I1005 20:34:14.772400  494911 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/17363-334135/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 succeeded
	I1005 20:34:15.079601  494911 cache.go:157] /home/jenkins/minikube-integration/17363-334135/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 exists
	I1005 20:34:15.079629  494911 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/17363-334135/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0" took 1.516333546s
	I1005 20:34:15.079645  494911 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/17363-334135/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 succeeded
	I1005 20:34:15.158650  494911 cache.go:157] /home/jenkins/minikube-integration/17363-334135/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 exists
	I1005 20:34:15.158685  494911 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/17363-334135/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0" took 1.595637254s
	I1005 20:34:15.158706  494911 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/17363-334135/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 succeeded
	I1005 20:34:15.529660  494911 cache.go:157] /home/jenkins/minikube-integration/17363-334135/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1005 20:34:15.529695  494911 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17363-334135/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 1.966472296s
	I1005 20:34:15.529714  494911 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17363-334135/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1005 20:34:15.558335  494911 cache.go:157] /home/jenkins/minikube-integration/17363-334135/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 exists
	I1005 20:34:15.558364  494911 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/17363-334135/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0" took 1.995335347s
	I1005 20:34:15.558375  494911 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/17363-334135/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 succeeded
	I1005 20:34:15.558390  494911 cache.go:87] Successfully saved all images to host disk.
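Note: the cache.go lines above show minikube's tarball layout: each image reference is saved under .minikube/cache/images/<arch>/, with the tag colon rewritten to an underscore so the path is filesystem-safe. A minimal sketch of that mapping (MINIKUBE_HOME stands in for the integration directory):

    # sketch: image reference -> cache path, as in the log lines above
    img="registry.k8s.io/pause:3.2"
    echo "$MINIKUBE_HOME/cache/images/amd64/${img/:/_}"
    # -> .../cache/images/amd64/registry.k8s.io/pause_3.2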
	I1005 20:34:17.124367  494911 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-969365
	
	I1005 20:34:17.124459  494911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-969365
	I1005 20:34:17.141569  494911 main.go:141] libmachine: Using SSH client type: native
	I1005 20:34:17.142105  494911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 127.0.0.1 33248 <nil> <nil>}
	I1005 20:34:17.142135  494911 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-969365' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-969365/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-969365' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1005 20:34:17.250528  494911 main.go:141] libmachine: SSH cmd err, output: <nil>: 
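Note: the SSH script above is an idempotent hostname fix: it leaves /etc/hosts alone when an entry for the new hostname already exists, rewrites the Debian/Ubuntu-style 127.0.1.1 line when one is present, and appends one otherwise. The guard can be checked by hand with the same pattern the script uses:

    # does any /etc/hosts line already end in the hostname?
    grep -xq '.*\sstopped-upgrade-969365' /etc/hosts && echo "already mapped"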
	I1005 20:34:17.250567  494911 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17363-334135/.minikube CaCertPath:/home/jenkins/minikube-integration/17363-334135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17363-334135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17363-334135/.minikube}
	I1005 20:34:17.250609  494911 ubuntu.go:177] setting up certificates
	I1005 20:34:17.250626  494911 provision.go:83] configureAuth start
	I1005 20:34:17.250680  494911 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-969365
	I1005 20:34:17.266581  494911 provision.go:138] copyHostCerts
	I1005 20:34:17.266645  494911 exec_runner.go:144] found /home/jenkins/minikube-integration/17363-334135/.minikube/ca.pem, removing ...
	I1005 20:34:17.266655  494911 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17363-334135/.minikube/ca.pem
	I1005 20:34:17.266711  494911 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-334135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17363-334135/.minikube/ca.pem (1078 bytes)
	I1005 20:34:17.266806  494911 exec_runner.go:144] found /home/jenkins/minikube-integration/17363-334135/.minikube/cert.pem, removing ...
	I1005 20:34:17.266815  494911 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17363-334135/.minikube/cert.pem
	I1005 20:34:17.266839  494911 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-334135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17363-334135/.minikube/cert.pem (1123 bytes)
	I1005 20:34:17.266913  494911 exec_runner.go:144] found /home/jenkins/minikube-integration/17363-334135/.minikube/key.pem, removing ...
	I1005 20:34:17.266921  494911 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17363-334135/.minikube/key.pem
	I1005 20:34:17.266941  494911 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-334135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17363-334135/.minikube/key.pem (1675 bytes)
	I1005 20:34:17.266998  494911 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17363-334135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17363-334135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17363-334135/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-969365 san=[172.17.0.2 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-969365]
	I1005 20:34:17.634152  494911 provision.go:172] copyRemoteCerts
	I1005 20:34:17.634229  494911 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1005 20:34:17.634276  494911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-969365
	I1005 20:34:17.675183  494911 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33248 SSHKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/stopped-upgrade-969365/id_rsa Username:docker}
	I1005 20:34:17.768158  494911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-334135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1005 20:34:17.789721  494911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-334135/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1005 20:34:17.810173  494911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-334135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1005 20:34:17.832782  494911 provision.go:86] duration metric: configureAuth took 582.136864ms
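Note: configureAuth regenerates the machine's server certificate from the local CA with the SANs shown on the provision.go line (container IP, 127.0.0.1, localhost, minikube, and the profile name), then copies the CA and server pair into /etc/docker over SSH. A hypothetical openssl equivalent of the signing step, assuming ca.pem, ca-key.pem, and server-key.pem are at hand (minikube does this in Go, not via openssl):

    openssl req -new -key server-key.pem -subj "/O=jenkins.stopped-upgrade-969365" |
      openssl x509 -req -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
        -extfile <(printf 'subjectAltName=IP:172.17.0.2,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:stopped-upgrade-969365') \
        -out server.pem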
	I1005 20:34:17.832817  494911 ubuntu.go:193] setting minikube options for container-runtime
	I1005 20:34:17.833018  494911 config.go:182] Loaded profile config "stopped-upgrade-969365": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1005 20:34:17.833155  494911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-969365
	I1005 20:34:17.860445  494911 main.go:141] libmachine: Using SSH client type: native
	I1005 20:34:17.861053  494911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 127.0.0.1 33248 <nil> <nil>}
	I1005 20:34:17.861089  494911 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1005 20:34:18.589345  494911 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1005 20:34:18.589379  494911 machine.go:91] provisioned docker machine in 4.635705129s
	I1005 20:34:18.589392  494911 start.go:300] post-start starting for "stopped-upgrade-969365" (driver="docker")
	I1005 20:34:18.589411  494911 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1005 20:34:18.589472  494911 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1005 20:34:18.589529  494911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-969365
	I1005 20:34:18.641525  494911 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33248 SSHKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/stopped-upgrade-969365/id_rsa Username:docker}
	I1005 20:34:18.732071  494911 ssh_runner.go:195] Run: cat /etc/os-release
	I1005 20:34:18.735045  494911 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1005 20:34:18.735091  494911 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1005 20:34:18.735104  494911 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1005 20:34:18.735113  494911 info.go:137] Remote host: Ubuntu 19.10
	I1005 20:34:18.735133  494911 filesync.go:126] Scanning /home/jenkins/minikube-integration/17363-334135/.minikube/addons for local assets ...
	I1005 20:34:18.735201  494911 filesync.go:126] Scanning /home/jenkins/minikube-integration/17363-334135/.minikube/files for local assets ...
	I1005 20:34:18.735292  494911 filesync.go:149] local asset: /home/jenkins/minikube-integration/17363-334135/.minikube/files/etc/ssl/certs/3409292.pem -> 3409292.pem in /etc/ssl/certs
	I1005 20:34:18.735410  494911 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1005 20:34:18.742846  494911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-334135/.minikube/files/etc/ssl/certs/3409292.pem --> /etc/ssl/certs/3409292.pem (1708 bytes)
	I1005 20:34:18.767876  494911 start.go:303] post-start completed in 178.461013ms
	I1005 20:34:18.767956  494911 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1005 20:34:18.768008  494911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-969365
	I1005 20:34:18.793624  494911 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33248 SSHKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/stopped-upgrade-969365/id_rsa Username:docker}
	I1005 20:34:18.895185  494911 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1005 20:34:18.898972  494911 fix.go:56] fixHost completed within 5.335408788s
	I1005 20:34:18.899000  494911 start.go:83] releasing machines lock for "stopped-upgrade-969365", held for 5.335458888s
	I1005 20:34:18.899090  494911 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-969365
	I1005 20:34:18.931257  494911 ssh_runner.go:195] Run: cat /version.json
	I1005 20:34:18.931335  494911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-969365
	I1005 20:34:18.931459  494911 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1005 20:34:18.931530  494911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-969365
	I1005 20:34:18.980255  494911 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33248 SSHKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/stopped-upgrade-969365/id_rsa Username:docker}
	I1005 20:34:18.988429  494911 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33248 SSHKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/stopped-upgrade-969365/id_rsa Username:docker}
	W1005 20:34:19.098915  494911 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
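Note: the missing /version.json is logged as a warning only: the file is a build stamp that newer kicbase images carry, and the v1.9.0-era image this test restarts appears to predate it. Quick check on any node:

    # present on recent kicbase images, absent on old ones; absence is non-fatal
    test -f /version.json && cat /version.json || echo "no /version.json in this base image"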
	I1005 20:34:19.098977  494911 ssh_runner.go:195] Run: systemctl --version
	I1005 20:34:19.102570  494911 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1005 20:34:19.148233  494911 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1005 20:34:19.158894  494911 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1005 20:34:19.173992  494911 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1005 20:34:19.174052  494911 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1005 20:34:19.198389  494911 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
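Note: cni.go avoids conflicts with the cluster's chosen CNI by renaming the stock loopback/bridge/podman configs rather than deleting them (the .mk_disabled suffix above). Restoring them is just the reverse rename:

    # sketch: bring back the CNI configs minikube parked
    for f in /etc/cni/net.d/*.mk_disabled; do
      sudo mv "$f" "${f%.mk_disabled}"
    done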
	I1005 20:34:19.198410  494911 start.go:469] detecting cgroup driver to use...
	I1005 20:34:19.198441  494911 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1005 20:34:19.198488  494911 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1005 20:34:19.218624  494911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1005 20:34:19.233943  494911 docker.go:197] disabling cri-docker service (if available) ...
	I1005 20:34:19.233984  494911 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1005 20:34:19.245494  494911 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1005 20:34:19.254969  494911 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1005 20:34:19.265058  494911 docker.go:207] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1005 20:34:19.265108  494911 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1005 20:34:19.345386  494911 docker.go:213] disabling docker service ...
	I1005 20:34:19.345446  494911 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1005 20:34:19.355568  494911 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1005 20:34:19.366264  494911 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1005 20:34:19.437707  494911 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1005 20:34:19.515465  494911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
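Note: docker.go stops and masks docker.socket as well as docker.service because socket activation would otherwise restart the daemon on the next client connection; the final is-active probe confirms nothing came back. The same check by hand:

    # both units should be inactive after the stop/disable/mask sequence above
    systemctl is-active docker.socket docker.service || echo "docker fully stopped"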
	I1005 20:34:19.527286  494911 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1005 20:34:19.542221  494911 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1005 20:34:19.542286  494911 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1005 20:34:19.553684  494911 out.go:177] 
	W1005 20:34:19.555043  494911 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1005 20:34:19.555078  494911 out.go:239] * 
	W1005 20:34:19.556171  494911 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1005 20:34:19.557464  494911 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:213: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-969365 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (83.86s)
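Note on the root cause: the RUNTIME_ENABLE exit comes from the pause_image rewrite above, which assumes the drop-in file /etc/crio/crio.conf.d/02-crio.conf. The base image restored by this upgrade test dates from minikube v1.9.0 (Ubuntu 19.10) and appears to ship only a monolithic /etc/crio/crio.conf, so sed exits 2 on the missing path and start aborts with exit status 90. A hedged sketch of a fallback that would tolerate both layouts (not minikube's actual fix):

    # patch pause_image in whichever CRI-O config actually exists
    for f in /etc/crio/crio.conf.d/02-crio.conf /etc/crio/crio.conf; do
      if [ -f "$f" ]; then
        sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$f"
        break
      fi
    done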

                                                
                                    

Test pass (277/307)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 8.4
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.07
10 TestDownloadOnly/v1.28.2/json-events 5.6
11 TestDownloadOnly/v1.28.2/preload-exists 0
15 TestDownloadOnly/v1.28.2/LogsDuration 0.06
16 TestDownloadOnly/DeleteAll 0.22
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.14
18 TestDownloadOnlyKic 1.31
19 TestBinaryMirror 0.76
20 TestOffline 82.25
23 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
24 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
25 TestAddons/Setup 129.76
27 TestAddons/parallel/Registry 14.43
29 TestAddons/parallel/InspektorGadget 11.71
30 TestAddons/parallel/MetricsServer 5.75
31 TestAddons/parallel/HelmTiller 9.38
33 TestAddons/parallel/CSI 91.02
34 TestAddons/parallel/Headlamp 11.98
35 TestAddons/parallel/CloudSpanner 6.28
36 TestAddons/parallel/LocalPath 11.03
39 TestAddons/serial/GCPAuth/Namespaces 0.13
40 TestAddons/StoppedEnableDisable 12.31
41 TestCertOptions 25.49
42 TestCertExpiration 227.55
44 TestForceSystemdFlag 27.01
45 TestForceSystemdEnv 25.39
47 TestKVMDriverInstallOrUpdate 3.15
51 TestErrorSpam/setup 25.46
52 TestErrorSpam/start 0.62
53 TestErrorSpam/status 0.93
54 TestErrorSpam/pause 1.59
55 TestErrorSpam/unpause 1.63
56 TestErrorSpam/stop 1.4
59 TestFunctional/serial/CopySyncFile 0
60 TestFunctional/serial/StartWithProxy 71.89
61 TestFunctional/serial/AuditLog 0
62 TestFunctional/serial/SoftStart 41.65
63 TestFunctional/serial/KubeContext 0.05
64 TestFunctional/serial/KubectlGetPods 0.07
67 TestFunctional/serial/CacheCmd/cache/add_remote 3.11
68 TestFunctional/serial/CacheCmd/cache/add_local 1.27
69 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
70 TestFunctional/serial/CacheCmd/cache/list 0.05
71 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
72 TestFunctional/serial/CacheCmd/cache/cache_reload 1.71
73 TestFunctional/serial/CacheCmd/cache/delete 0.09
74 TestFunctional/serial/MinikubeKubectlCmd 0.11
75 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
76 TestFunctional/serial/ExtraConfig 26.76
77 TestFunctional/serial/ComponentHealth 0.08
78 TestFunctional/serial/LogsCmd 1.5
79 TestFunctional/serial/LogsFileCmd 1.53
80 TestFunctional/serial/InvalidService 4.59
82 TestFunctional/parallel/ConfigCmd 0.41
83 TestFunctional/parallel/DashboardCmd 11.65
84 TestFunctional/parallel/DryRun 0.51
85 TestFunctional/parallel/InternationalLanguage 0.22
86 TestFunctional/parallel/StatusCmd 1.43
90 TestFunctional/parallel/ServiceCmdConnect 9.14
91 TestFunctional/parallel/AddonsCmd 0.16
92 TestFunctional/parallel/PersistentVolumeClaim 27
94 TestFunctional/parallel/SSHCmd 0.68
95 TestFunctional/parallel/CpCmd 1.39
96 TestFunctional/parallel/MySQL 23.29
97 TestFunctional/parallel/FileSync 0.48
98 TestFunctional/parallel/CertSync 2.17
102 TestFunctional/parallel/NodeLabels 0.1
104 TestFunctional/parallel/NonActiveRuntimeDisabled 0.87
106 TestFunctional/parallel/License 0.2
107 TestFunctional/parallel/ServiceCmd/DeployApp 11.28
108 TestFunctional/parallel/ProfileCmd/profile_not_create 0.48
109 TestFunctional/parallel/MountCmd/any-port 8.89
110 TestFunctional/parallel/ProfileCmd/profile_list 0.39
111 TestFunctional/parallel/ProfileCmd/profile_json_output 0.49
112 TestFunctional/parallel/UpdateContextCmd/no_changes 0.16
113 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.16
114 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.22
115 TestFunctional/parallel/MountCmd/specific-port 1.99
116 TestFunctional/parallel/ServiceCmd/List 0.81
117 TestFunctional/parallel/MountCmd/VerifyCleanup 2.13
118 TestFunctional/parallel/ServiceCmd/JSONOutput 0.55
119 TestFunctional/parallel/ServiceCmd/HTTPS 0.4
120 TestFunctional/parallel/ServiceCmd/Format 0.38
121 TestFunctional/parallel/ServiceCmd/URL 0.39
123 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.64
124 TestFunctional/parallel/Version/short 0.06
125 TestFunctional/parallel/Version/components 1.51
126 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
127 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
128 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
129 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
130 TestFunctional/parallel/ImageCommands/ImageBuild 1.91
131 TestFunctional/parallel/ImageCommands/Setup 1.08
132 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
134 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 13.55
135 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 6.48
136 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.11
138 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
139 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
144 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2.21
145 TestFunctional/parallel/ImageCommands/ImageRemove 0.49
146 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.39
147 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.92
148 TestFunctional/delete_addon-resizer_images 0.07
149 TestFunctional/delete_my-image_image 0.02
150 TestFunctional/delete_minikube_cached_images 0.02
154 TestIngressAddonLegacy/StartLegacyK8sCluster 65.85
156 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 10.35
157 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.58
161 TestJSONOutput/start/Command 40.11
162 TestJSONOutput/start/Audit 0
164 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
165 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
167 TestJSONOutput/pause/Command 0.71
168 TestJSONOutput/pause/Audit 0
170 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
171 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
173 TestJSONOutput/unpause/Command 0.63
174 TestJSONOutput/unpause/Audit 0
176 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
177 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
179 TestJSONOutput/stop/Command 5.89
180 TestJSONOutput/stop/Audit 0
182 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
184 TestErrorJSONOutput 0.21
186 TestKicCustomNetwork/create_custom_network 34.34
187 TestKicCustomNetwork/use_default_bridge_network 27.75
188 TestKicExistingNetwork 26.11
189 TestKicCustomSubnet 25.34
190 TestKicStaticIP 28.68
191 TestMainNoArgs 0.05
192 TestMinikubeProfile 51.5
195 TestMountStart/serial/StartWithMountFirst 8.45
196 TestMountStart/serial/VerifyMountFirst 0.26
197 TestMountStart/serial/StartWithMountSecond 5.54
198 TestMountStart/serial/VerifyMountSecond 0.27
199 TestMountStart/serial/DeleteFirst 1.71
200 TestMountStart/serial/VerifyMountPostDelete 0.25
201 TestMountStart/serial/Stop 1.21
202 TestMountStart/serial/RestartStopped 7.15
203 TestMountStart/serial/VerifyMountPostStop 0.26
206 TestMultiNode/serial/FreshStart2Nodes 119.52
207 TestMultiNode/serial/DeployApp2Nodes 3.22
209 TestMultiNode/serial/AddNode 19.62
210 TestMultiNode/serial/ProfileList 0.28
211 TestMultiNode/serial/CopyFile 9.28
212 TestMultiNode/serial/StopNode 2.15
213 TestMultiNode/serial/StartAfterStop 11.77
214 TestMultiNode/serial/RestartKeepsNodes 115.77
215 TestMultiNode/serial/DeleteNode 4.74
216 TestMultiNode/serial/StopMultiNode 23.86
217 TestMultiNode/serial/RestartMultiNode 78.72
218 TestMultiNode/serial/ValidateNameConflict 23.32
223 TestPreload 148.73
225 TestScheduledStopUnix 99.89
228 TestInsufficientStorage 12.96
231 TestKubernetesUpgrade 355.18
232 TestMissingContainerUpgrade 152.9
234 TestNoKubernetes/serial/StartNoK8sWithVersion 0.06
235 TestStoppedBinaryUpgrade/Setup 0.52
236 TestNoKubernetes/serial/StartWithK8s 37.28
238 TestNoKubernetes/serial/StartWithStopK8s 9.75
239 TestNoKubernetes/serial/Start 5.58
240 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
241 TestNoKubernetes/serial/ProfileList 1.35
242 TestNoKubernetes/serial/Stop 1.75
243 TestNoKubernetes/serial/StartNoArgs 8.95
244 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.32
248 TestStoppedBinaryUpgrade/MinikubeLogs 0.65
253 TestNetworkPlugins/group/false 2.9
265 TestPause/serial/Start 76.44
266 TestPause/serial/SecondStartNoReconfiguration 40.86
267 TestPause/serial/Pause 0.75
268 TestPause/serial/VerifyStatus 0.3
269 TestPause/serial/Unpause 0.67
270 TestPause/serial/PauseAgain 0.84
271 TestPause/serial/DeletePaused 2.62
272 TestPause/serial/VerifyDeletedResources 0.6
274 TestStartStop/group/old-k8s-version/serial/FirstStart 120.74
276 TestStartStop/group/no-preload/serial/FirstStart 60.85
277 TestStartStop/group/no-preload/serial/DeployApp 8.33
278 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.84
279 TestStartStop/group/no-preload/serial/Stop 11.87
280 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.16
281 TestStartStop/group/no-preload/serial/SecondStart 341
282 TestStartStop/group/old-k8s-version/serial/DeployApp 7.36
283 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.7
284 TestStartStop/group/old-k8s-version/serial/Stop 11.84
285 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.15
286 TestStartStop/group/old-k8s-version/serial/SecondStart 412.48
288 TestStartStop/group/embed-certs/serial/FirstStart 70.37
290 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 40.68
291 TestStartStop/group/embed-certs/serial/DeployApp 9.42
292 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.99
293 TestStartStop/group/embed-certs/serial/Stop 11.94
294 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.17
295 TestStartStop/group/embed-certs/serial/SecondStart 346.86
296 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.37
297 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.99
298 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.98
299 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.16
300 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 337.22
301 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 12.02
302 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
303 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.3
304 TestStartStop/group/no-preload/serial/Pause 2.71
306 TestStartStop/group/newest-cni/serial/FirstStart 37.73
307 TestStartStop/group/newest-cni/serial/DeployApp 0
308 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.94
309 TestStartStop/group/newest-cni/serial/Stop 1.23
310 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
311 TestStartStop/group/newest-cni/serial/SecondStart 26.67
312 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
313 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
314 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.3
315 TestStartStop/group/newest-cni/serial/Pause 2.67
316 TestNetworkPlugins/group/auto/Start 41.2
317 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
318 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
319 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.35
320 TestStartStop/group/old-k8s-version/serial/Pause 3.06
321 TestNetworkPlugins/group/auto/KubeletFlags 0.32
322 TestNetworkPlugins/group/auto/NetCatPod 10.36
323 TestNetworkPlugins/group/kindnet/Start 71.85
324 TestNetworkPlugins/group/auto/DNS 0.17
325 TestNetworkPlugins/group/auto/Localhost 0.18
326 TestNetworkPlugins/group/auto/HairPin 0.17
327 TestNetworkPlugins/group/calico/Start 71.79
328 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 12.02
329 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
330 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.3
331 TestStartStop/group/embed-certs/serial/Pause 2.96
332 TestNetworkPlugins/group/custom-flannel/Start 62.29
333 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 15.02
334 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
335 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
336 TestNetworkPlugins/group/kindnet/NetCatPod 10.34
337 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.12
338 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.34
339 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.06
340 TestNetworkPlugins/group/kindnet/DNS 0.18
341 TestNetworkPlugins/group/kindnet/Localhost 0.18
342 TestNetworkPlugins/group/kindnet/HairPin 0.22
343 TestNetworkPlugins/group/enable-default-cni/Start 40
344 TestNetworkPlugins/group/calico/ControllerPod 5.04
345 TestNetworkPlugins/group/calico/KubeletFlags 0.29
346 TestNetworkPlugins/group/calico/NetCatPod 10.51
347 TestNetworkPlugins/group/flannel/Start 60.04
348 TestNetworkPlugins/group/calico/DNS 0.26
349 TestNetworkPlugins/group/calico/Localhost 0.23
350 TestNetworkPlugins/group/calico/HairPin 0.16
351 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.28
352 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.29
353 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.36
354 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.32
355 TestNetworkPlugins/group/custom-flannel/DNS 0.27
356 TestNetworkPlugins/group/custom-flannel/Localhost 0.21
357 TestNetworkPlugins/group/custom-flannel/HairPin 0.21
358 TestNetworkPlugins/group/bridge/Start 78.43
359 TestNetworkPlugins/group/enable-default-cni/DNS 33.22
360 TestNetworkPlugins/group/flannel/ControllerPod 5.02
361 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
362 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
363 TestNetworkPlugins/group/flannel/KubeletFlags 0.32
364 TestNetworkPlugins/group/flannel/NetCatPod 10.31
365 TestNetworkPlugins/group/flannel/DNS 0.18
366 TestNetworkPlugins/group/flannel/Localhost 0.16
367 TestNetworkPlugins/group/flannel/HairPin 0.14
368 TestNetworkPlugins/group/bridge/KubeletFlags 0.26
369 TestNetworkPlugins/group/bridge/NetCatPod 11.28
370 TestNetworkPlugins/group/bridge/DNS 0.16
371 TestNetworkPlugins/group/bridge/Localhost 0.13
372 TestNetworkPlugins/group/bridge/HairPin 0.14
x
+
TestDownloadOnly/v1.16.0/json-events (8.4s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-096441 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-096441 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (8.402802426s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (8.40s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-096441
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-096441: exit status 85 (64.410822ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-096441 | jenkins | v1.31.2 | 05 Oct 23 20:02 UTC |          |
	|         | -p download-only-096441        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/05 20:02:54
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1005 20:02:54.176477  340941 out.go:296] Setting OutFile to fd 1 ...
	I1005 20:02:54.176704  340941 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 20:02:54.176712  340941 out.go:309] Setting ErrFile to fd 2...
	I1005 20:02:54.176717  340941 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 20:02:54.176938  340941 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-334135/.minikube/bin
	W1005 20:02:54.177061  340941 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17363-334135/.minikube/config/config.json: open /home/jenkins/minikube-integration/17363-334135/.minikube/config/config.json: no such file or directory
	I1005 20:02:54.177773  340941 out.go:303] Setting JSON to true
	I1005 20:02:54.178785  340941 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6303,"bootTime":1696529871,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1005 20:02:54.178866  340941 start.go:138] virtualization: kvm guest
	I1005 20:02:54.181797  340941 out.go:97] [download-only-096441] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1005 20:02:54.183769  340941 out.go:169] MINIKUBE_LOCATION=17363
	W1005 20:02:54.182002  340941 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17363-334135/.minikube/cache/preloaded-tarball: no such file or directory
	I1005 20:02:54.182086  340941 notify.go:220] Checking for updates...
	I1005 20:02:54.186902  340941 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1005 20:02:54.188571  340941 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17363-334135/kubeconfig
	I1005 20:02:54.189971  340941 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-334135/.minikube
	I1005 20:02:54.191369  340941 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1005 20:02:54.194275  340941 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1005 20:02:54.194650  340941 driver.go:378] Setting default libvirt URI to qemu:///system
	I1005 20:02:54.220095  340941 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1005 20:02:54.220199  340941 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 20:02:54.279414  340941 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:43 SystemTime:2023-10-05 20:02:54.269564342 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1005 20:02:54.279524  340941 docker.go:294] overlay module found
	I1005 20:02:54.281434  340941 out.go:97] Using the docker driver based on user configuration
	I1005 20:02:54.281465  340941 start.go:298] selected driver: docker
	I1005 20:02:54.281471  340941 start.go:902] validating driver "docker" against <nil>
	I1005 20:02:54.281615  340941 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 20:02:54.337067  340941 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:43 SystemTime:2023-10-05 20:02:54.328060301 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1005 20:02:54.337254  340941 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1005 20:02:54.337973  340941 start_flags.go:384] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I1005 20:02:54.338235  340941 start_flags.go:905] Wait components to verify : map[apiserver:true system_pods:true]
	I1005 20:02:54.340285  340941 out.go:169] Using Docker driver with root privileges
	I1005 20:02:54.341847  340941 cni.go:84] Creating CNI manager for ""
	I1005 20:02:54.341893  340941 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1005 20:02:54.341907  340941 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I1005 20:02:54.341925  340941 start_flags.go:321] config:
	{Name:download-only-096441 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-096441 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1005 20:02:54.343589  340941 out.go:97] Starting control plane node download-only-096441 in cluster download-only-096441
	I1005 20:02:54.343612  340941 cache.go:122] Beginning downloading kic base image for docker with crio
	I1005 20:02:54.344929  340941 out.go:97] Pulling base image ...
	I1005 20:02:54.344963  340941 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1005 20:02:54.345126  340941 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon
	I1005 20:02:54.362811  340941 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae to local cache
	I1005 20:02:54.363037  340941 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local cache directory
	I1005 20:02:54.363156  340941 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae to local cache
	I1005 20:02:54.379217  340941 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I1005 20:02:54.379244  340941 cache.go:57] Caching tarball of preloaded images
	I1005 20:02:54.379411  340941 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1005 20:02:54.381478  340941 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1005 20:02:54.381503  340941 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1005 20:02:54.477045  340941 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/17363-334135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I1005 20:02:57.621637  340941 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae as a tarball
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-096441"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.07s)
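Note: the Last Start log above shows the preload mechanism: minikube fetches a per-Kubernetes-version tarball of pre-pulled images from storage.googleapis.com and verifies it against the md5 embedded in the ?checksum= query. The exit-85 logs failure at the end is expected, since a --download-only profile never creates a node (hence: The control plane node "" does not exist). The fetch-and-verify step by hand, with the same artifact and checksum as the download.go line:

    url='https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4'
    curl -fsSLo preload.tar.lz4 "$url"
    echo '432b600409d778ea7a21214e83948570  preload.tar.lz4' | md5sum -c -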

                                                
                                    
x
+
TestDownloadOnly/v1.28.2/json-events (5.6s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-096441 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-096441 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.604328333s)
--- PASS: TestDownloadOnly/v1.28.2/json-events (5.60s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/preload-exists
--- PASS: TestDownloadOnly/v1.28.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.2/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-096441
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-096441: exit status 85 (62.399773ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-096441 | jenkins | v1.31.2 | 05 Oct 23 20:02 UTC |          |
	|         | -p download-only-096441        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-096441 | jenkins | v1.31.2 | 05 Oct 23 20:03 UTC |          |
	|         | -p download-only-096441        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.2   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/05 20:03:02
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1005 20:03:02.646210  341096 out.go:296] Setting OutFile to fd 1 ...
	I1005 20:03:02.646360  341096 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 20:03:02.646369  341096 out.go:309] Setting ErrFile to fd 2...
	I1005 20:03:02.646374  341096 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 20:03:02.646551  341096 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-334135/.minikube/bin
	W1005 20:03:02.646671  341096 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17363-334135/.minikube/config/config.json: open /home/jenkins/minikube-integration/17363-334135/.minikube/config/config.json: no such file or directory
	I1005 20:03:02.647209  341096 out.go:303] Setting JSON to true
	I1005 20:03:02.648178  341096 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6312,"bootTime":1696529871,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1005 20:03:02.648253  341096 start.go:138] virtualization: kvm guest
	I1005 20:03:02.650476  341096 out.go:97] [download-only-096441] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1005 20:03:02.652177  341096 out.go:169] MINIKUBE_LOCATION=17363
	I1005 20:03:02.650736  341096 notify.go:220] Checking for updates...
	I1005 20:03:02.655199  341096 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1005 20:03:02.656935  341096 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17363-334135/kubeconfig
	I1005 20:03:02.658399  341096 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-334135/.minikube
	I1005 20:03:02.659978  341096 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1005 20:03:02.662788  341096 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1005 20:03:02.663561  341096 config.go:182] Loaded profile config "download-only-096441": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W1005 20:03:02.663691  341096 start.go:810] api.Load failed for download-only-096441: filestore "download-only-096441": Docker machine "download-only-096441" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1005 20:03:02.663826  341096 driver.go:378] Setting default libvirt URI to qemu:///system
	W1005 20:03:02.663883  341096 start.go:810] api.Load failed for download-only-096441: filestore "download-only-096441": Docker machine "download-only-096441" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1005 20:03:02.689523  341096 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1005 20:03:02.689634  341096 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 20:03:02.744514  341096 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-10-05 20:03:02.735375349 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1005 20:03:02.744619  341096 docker.go:294] overlay module found
	I1005 20:03:02.746679  341096 out.go:97] Using the docker driver based on existing profile
	I1005 20:03:02.746707  341096 start.go:298] selected driver: docker
	I1005 20:03:02.746713  341096 start.go:902] validating driver "docker" against &{Name:download-only-096441 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-096441 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1005 20:03:02.746921  341096 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 20:03:02.802849  341096 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-10-05 20:03:02.793986182 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1005 20:03:02.803869  341096 cni.go:84] Creating CNI manager for ""
	I1005 20:03:02.803906  341096 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1005 20:03:02.803926  341096 start_flags.go:321] config:
	{Name:download-only-096441 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:download-only-096441 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1005 20:03:02.805957  341096 out.go:97] Starting control plane node download-only-096441 in cluster download-only-096441
	I1005 20:03:02.805983  341096 cache.go:122] Beginning downloading kic base image for docker with crio
	I1005 20:03:02.807446  341096 out.go:97] Pulling base image ...
	I1005 20:03:02.807487  341096 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1005 20:03:02.807601  341096 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon
	I1005 20:03:02.824692  341096 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae to local cache
	I1005 20:03:02.824866  341096 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local cache directory
	I1005 20:03:02.824899  341096 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local cache directory, skipping pull
	I1005 20:03:02.824910  341096 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae exists in cache, skipping pull
	I1005 20:03:02.824928  341096 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae as a tarball
	I1005 20:03:02.843695  341096 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.2/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4
	I1005 20:03:02.843718  341096 cache.go:57] Caching tarball of preloaded images
	I1005 20:03:02.843882  341096 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1005 20:03:02.845846  341096 out.go:97] Downloading Kubernetes v1.28.2 preload ...
	I1005 20:03:02.845868  341096 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 ...
	I1005 20:03:02.881065  341096 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.2/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:63ef340a9dae90462e676325aa502af3 -> /home/jenkins/minikube-integration/17363-334135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4
	I1005 20:03:06.585465  341096 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 ...
	I1005 20:03:06.585573  341096 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17363-334135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 ...
	I1005 20:03:07.518146  341096 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1005 20:03:07.518301  341096 profile.go:148] Saving config to /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/download-only-096441/config.json ...
	I1005 20:03:07.518515  341096 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1005 20:03:07.518704  341096 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17363-334135/.minikube/cache/linux/amd64/v1.28.2/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-096441"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.2/LogsDuration (0.06s)
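Note: the download.go:107 line in the trace above appends a ?checksum=md5:... query to the preload URL, and the preload.go:249/256 lines then save and verify that checksum on disk. A minimal Go sketch of the verification step under that scheme, using only the standard library (the digest is the one shown in the log; the local file path is a placeholder, not a path from this run):

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifyMD5 re-hashes a downloaded file and compares it to the expected
// hex digest, mirroring the post-download check logged above.
func verifyMD5(path, want string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	// Placeholder path; digest taken from the download URL above.
	if err := verifyMD5("preloaded-images.tar.lz4", "63ef340a9dae90462e676325aa502af3"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("checksum OK")
}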

                                                
                                    
x
+
TestDownloadOnly/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.22s)
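Every "(dbg) Run:" entry in this report is the Go test harness shelling out to the built minikube binary and capturing combined stdout/stderr. A stripped-down sketch of that pattern (this is not the actual helpers_test.go implementation; the binary path matches the runs above):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// run executes a command with a timeout and returns its combined
// stdout/stderr, the way the "(dbg) Run:" helper lines report it.
func run(ctx context.Context, name string, args ...string) (string, error) {
	cmd := exec.CommandContext(ctx, name, args...)
	out, err := cmd.CombinedOutput()
	return string(out), err
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	// Hypothetical invocation mirroring the delete step above.
	out, err := run(ctx, "out/minikube-linux-amd64", "delete", "--all")
	fmt.Print(out)
	if err != nil {
		fmt.Println("exit:", err)
	}
}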

                                                
                                    
x
+
TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-096441
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnlyKic (1.31s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-170726 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-170726" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-170726
--- PASS: TestDownloadOnlyKic (1.31s)

                                                
                                    
x
+
TestBinaryMirror (0.76s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-924260 --alsologtostderr --binary-mirror http://127.0.0.1:42053 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-924260" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-924260
--- PASS: TestBinaryMirror (0.76s)
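The --binary-mirror http://127.0.0.1:42053 flag points minikube's kubectl/kubeadm/kubelet downloads at a local HTTP endpoint instead of dl.k8s.io. A throwaway mirror for a test like this can be as small as a file server over a directory laid out like the release tree (the directory name below is an assumption):

package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve a local directory that mimics the release layout,
	// e.g. ./mirror/release/v1.28.2/bin/linux/amd64/kubectl.
	http.Handle("/", http.FileServer(http.Dir("./mirror")))
	log.Fatal(http.ListenAndServe("127.0.0.1:42053", nil))
}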

                                                
                                    
x
+
TestOffline (82.25s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-615887 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-615887 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (1m19.960846658s)
helpers_test.go:175: Cleaning up "offline-crio-615887" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-615887
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-615887: (2.293302986s)
--- PASS: TestOffline (82.25s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:926: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-029116
addons_test.go:926: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-029116: exit status 85 (51.002689ms)

                                                
                                                
-- stdout --
	* Profile "addons-029116" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-029116"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)
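The assertion here is on the exact exit code: enabling an addon for a profile that does not exist must fail cleanly with status 85 rather than crash. Extracting that code from a subprocess is a common source of bugs; a sketch using the same command as above:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "addons", "enable", "dashboard", "-p", "addons-029116")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// ExitCode is the value the report compares against 85.
		fmt.Println("exit status:", ee.ExitCode())
	}
}

errors.As unwraps the *exec.ExitError that CombinedOutput returns on a non-zero exit, which is more robust than string-matching the error message.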

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:937: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-029116
addons_test.go:937: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-029116: exit status 85 (49.975203ms)

                                                
                                                
-- stdout --
	* Profile "addons-029116" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-029116"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/Setup (129.76s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-029116 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-029116 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m9.75832701s)
--- PASS: TestAddons/Setup (129.76s)

                                                
                                    
x
+
TestAddons/parallel/Registry (14.43s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 14.138604ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-hn8mj" [aa7dd669-eb26-4f18-b687-fa48e28bb06e] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.014753876s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-fhzqz" [370dc78f-71d6-4ae7-9f2a-8b5fb1cbd997] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.049401868s
addons_test.go:338: (dbg) Run:  kubectl --context addons-029116 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-029116 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Done: kubectl --context addons-029116 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.531602069s)
addons_test.go:357: (dbg) Run:  out/minikube-linux-amd64 -p addons-029116 ip
2023/10/05 20:05:34 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-amd64 -p addons-029116 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.43s)
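The registry is probed from two directions: in-cluster over service DNS (the busybox wget --spider against registry.kube-system.svc.cluster.local) and from the host via the node IP on port 5000 (the "GET http://192.168.49.2:5000" debug line). A sketch of the host-side probe; the IP and port are the ones minikube reported for this run:

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 10 * time.Second}
	// Node IP and registry port taken from the log above.
	resp, err := client.Get("http://192.168.49.2:5000")
	if err != nil {
		fmt.Println("registry unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("registry responded:", resp.Status)
}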

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.71s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:836: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-99wrb" [8b4bd2c6-a9fc-4e61-b68c-982e26927731] Running
addons_test.go:836: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.02388867s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-029116
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-029116: (6.680379201s)
--- PASS: TestAddons/parallel/InspektorGadget (11.71s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.75s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 3.520194ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-mkm8k" [22545870-82ee-4342-915b-7e2b9b12b4c5] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.012798447s
addons_test.go:413: (dbg) Run:  kubectl --context addons-029116 top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p addons-029116 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.75s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (9.38s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:454: tiller-deploy stabilized in 6.029662ms
addons_test.go:456: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-sr5rj" [ebf369dd-a3fd-4f09-b54b-34350f02a788] Running
addons_test.go:456: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.015485574s
addons_test.go:471: (dbg) Run:  kubectl --context addons-029116 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:471: (dbg) Done: kubectl --context addons-029116 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (3.796889632s)
addons_test.go:488: (dbg) Run:  out/minikube-linux-amd64 -p addons-029116 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (9.38s)

                                                
                                    
x
+
TestAddons/parallel/CSI (91.02s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:559: csi-hostpath-driver pods stabilized in 6.038778ms
addons_test.go:562: (dbg) Run:  kubectl --context addons-029116 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:562: (dbg) Done: kubectl --context addons-029116 create -f testdata/csi-hostpath-driver/pvc.yaml: (1.079776207s)
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-029116 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (the kubectl poll above ran 43 times in total during the 6m0s wait for pvc "hpvc")
addons_test.go:572: (dbg) Run:  kubectl --context addons-029116 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [2ef8279a-e798-4ddb-84bc-34ca62d3aabe] Pending
helpers_test.go:344: "task-pv-pod" [2ef8279a-e798-4ddb-84bc-34ca62d3aabe] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [2ef8279a-e798-4ddb-84bc-34ca62d3aabe] Running
addons_test.go:577: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.010528547s
addons_test.go:582: (dbg) Run:  kubectl --context addons-029116 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-029116 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-029116 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-029116 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-029116 delete pod task-pv-pod
addons_test.go:598: (dbg) Run:  kubectl --context addons-029116 delete pvc hpvc
addons_test.go:604: (dbg) Run:  kubectl --context addons-029116 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-029116 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (the kubectl poll above ran 19 times in total during the 6m0s wait for pvc "hpvc-restore")
addons_test.go:614: (dbg) Run:  kubectl --context addons-029116 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:619: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [7a343781-96c4-4328-94b7-88fc2ae40c08] Pending
helpers_test.go:344: "task-pv-pod-restore" [7a343781-96c4-4328-94b7-88fc2ae40c08] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [7a343781-96c4-4328-94b7-88fc2ae40c08] Running
addons_test.go:619: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.011471021s
addons_test.go:624: (dbg) Run:  kubectl --context addons-029116 delete pod task-pv-pod-restore
addons_test.go:628: (dbg) Run:  kubectl --context addons-029116 delete pvc hpvc-restore
addons_test.go:632: (dbg) Run:  kubectl --context addons-029116 delete volumesnapshot new-snapshot-demo
addons_test.go:636: (dbg) Run:  out/minikube-linux-amd64 -p addons-029116 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:636: (dbg) Done: out/minikube-linux-amd64 -p addons-029116 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.719750852s)
addons_test.go:640: (dbg) Run:  out/minikube-linux-amd64 -p addons-029116 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (91.02s)
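The long poll runs collapsed above are a simple phase loop: query .status.phase with a jsonpath expression and retry until the claim reports Bound or the deadline passes. A sketch of that loop, shelling out to kubectl the way the helpers do (the retry interval is an assumption; context and claim names are from this run):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCBound polls the claim's .status.phase until it reads Bound
// or the timeout elapses, mirroring the helpers_test.go:394 lines above.
func waitForPVCBound(kubeContext, name, namespace string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pvc", name, "-o", "jsonpath={.status.phase}", "-n", namespace).Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second) // assumed interval
	}
	return fmt.Errorf("pvc %s/%s not Bound within %v", namespace, name, timeout)
}

func main() {
	if err := waitForPVCBound("addons-029116", "hpvc", "default", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}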

                                                
                                    
x
+
TestAddons/parallel/Headlamp (11.98s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:822: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-029116 --alsologtostderr -v=1
addons_test.go:827: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-58b88cff49-6nxs6" [a2bd44a4-6aea-43ee-942e-9aaf83e4ce04] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-58b88cff49-6nxs6" [a2bd44a4-6aea-43ee-942e-9aaf83e4ce04] Running
addons_test.go:827: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.011534809s
--- PASS: TestAddons/parallel/Headlamp (11.98s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.28s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:855: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-7d49f968d9-76z97" [044d6dad-7075-4e8d-851f-685f9729d7cc] Running
addons_test.go:855: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.067480475s
addons_test.go:858: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-029116
addons_test.go:858: (dbg) Done: out/minikube-linux-amd64 addons disable cloud-spanner -p addons-029116: (1.201444542s)
--- PASS: TestAddons/parallel/CloudSpanner (6.28s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (11.03s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:871: (dbg) Run:  kubectl --context addons-029116 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:877: (dbg) Run:  kubectl --context addons-029116 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:881: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-029116 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (the kubectl poll above ran 7 times in total during the 5m0s wait for pvc "test-pvc")
addons_test.go:884: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [2e4f9afe-1dfa-4900-acc3-73a40b206f26] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [2e4f9afe-1dfa-4900-acc3-73a40b206f26] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [2e4f9afe-1dfa-4900-acc3-73a40b206f26] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:884: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.037169708s
addons_test.go:889: (dbg) Run:  kubectl --context addons-029116 get pvc test-pvc -o=json
addons_test.go:898: (dbg) Run:  out/minikube-linux-amd64 -p addons-029116 ssh "cat /opt/local-path-provisioner/pvc-4b90cbef-9395-48c7-bf53-d29fd7509af3_default_test-pvc/file1"
addons_test.go:910: (dbg) Run:  kubectl --context addons-029116 delete pod test-local-path
addons_test.go:914: (dbg) Run:  kubectl --context addons-029116 delete pvc test-pvc
addons_test.go:918: (dbg) Run:  out/minikube-linux-amd64 -p addons-029116 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (11.03s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:648: (dbg) Run:  kubectl --context addons-029116 create ns new-namespace
addons_test.go:662: (dbg) Run:  kubectl --context addons-029116 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.31s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-029116
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-029116: (12.061688841s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-029116
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-029116
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-029116
--- PASS: TestAddons/StoppedEnableDisable (12.31s)

                                                
                                    
x
+
TestCertOptions (25.49s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-336573 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-336573 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (21.011477532s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-336573 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-336573 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-336573 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-336573" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-336573
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-336573: (3.910772313s)
--- PASS: TestCertOptions (25.49s)
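The openssl step above dumps the apiserver certificate so the test can confirm that the extra --apiserver-ips and --apiserver-names values ended up as SANs. The same inspection in Go, assuming a local copy of /var/lib/minikube/certs/apiserver.crt (in the test the file is read over minikube ssh):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Hypothetical local copy of the apiserver certificate.
	data, err := os.ReadFile("apiserver.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// The SANs the test asserts: the extra IPs and names passed at start.
	fmt.Println("DNS names:", cert.DNSNames)
	fmt.Println("IPs:      ", cert.IPAddresses)
}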

                                                
                                    
x
+
TestCertExpiration (227.55s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-107379 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-107379 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (27.007065935s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-107379 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-107379 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (18.284350711s)
helpers_test.go:175: Cleaning up "cert-expiration-107379" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-107379
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-107379: (2.252622596s)
--- PASS: TestCertExpiration (227.55s)
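--cert-expiration=3m issues certificates that expire within minutes; the second start with 8760h (one year) forces minikube to regenerate them. A sketch of checking the remaining lifetime of a profile certificate, reusing the PEM-parsing pattern from the sketch above (the local filename is hypothetical):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Hypothetical local copy of the profile's client certificate.
	data, err := os.ReadFile("client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	left := time.Until(cert.NotAfter)
	fmt.Printf("expires %s (%s left)\n", cert.NotAfter.Format(time.RFC3339), left.Round(time.Second))
	// With --cert-expiration=3m this goes negative within minutes,
	// which is what prompts regeneration on the next start.
	if left < 0 {
		fmt.Println("certificate already expired")
	}
}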

                                                
                                    
x
+
TestForceSystemdFlag (27.01s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-544448 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-544448 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (24.486364377s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-544448 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-544448" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-544448
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-544448: (2.277347743s)
--- PASS: TestForceSystemdFlag (27.01s)
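With --force-systemd, the test cats the CRI-O drop-in shown above and expects the systemd cgroup manager to be configured. A sketch of that assertion, fetching the file the same way over minikube ssh (the exact expected TOML line is an assumption about CRI-O's config format, not quoted from this run):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Fetch the CRI-O drop-in the same way the test does.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "force-systemd-flag-544448",
		"ssh", "cat /etc/crio/crio.conf.d/02-crio.conf").CombinedOutput()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	if strings.Contains(string(out), `cgroup_manager = "systemd"`) {
		fmt.Println("CRI-O is using the systemd cgroup manager")
	} else {
		fmt.Println("systemd cgroup manager not configured")
	}
}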

                                                
                                    
x
+
TestForceSystemdEnv (25.39s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-683528 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-683528 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (23.04593665s)
helpers_test.go:175: Cleaning up "force-systemd-env-683528" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-683528
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-683528: (2.342255957s)
--- PASS: TestForceSystemdEnv (25.39s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (3.15s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.15s)

                                                
                                    
x
+
TestErrorSpam/setup (25.46s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-962911 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-962911 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-962911 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-962911 --driver=docker  --container-runtime=crio: (25.454856017s)
--- PASS: TestErrorSpam/setup (25.46s)

                                                
                                    
x
+
TestErrorSpam/start (0.62s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-962911 --log_dir /tmp/nospam-962911 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-962911 --log_dir /tmp/nospam-962911 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-962911 --log_dir /tmp/nospam-962911 start --dry-run
--- PASS: TestErrorSpam/start (0.62s)

                                                
                                    
x
+
TestErrorSpam/status (0.93s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-962911 --log_dir /tmp/nospam-962911 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-962911 --log_dir /tmp/nospam-962911 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-962911 --log_dir /tmp/nospam-962911 status
--- PASS: TestErrorSpam/status (0.93s)

                                                
                                    
x
+
TestErrorSpam/pause (1.59s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-962911 --log_dir /tmp/nospam-962911 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-962911 --log_dir /tmp/nospam-962911 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-962911 --log_dir /tmp/nospam-962911 pause
--- PASS: TestErrorSpam/pause (1.59s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.63s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-962911 --log_dir /tmp/nospam-962911 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-962911 --log_dir /tmp/nospam-962911 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-962911 --log_dir /tmp/nospam-962911 unpause
--- PASS: TestErrorSpam/unpause (1.63s)

                                                
                                    
TestErrorSpam/stop (1.4s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-962911 --log_dir /tmp/nospam-962911 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-962911 --log_dir /tmp/nospam-962911 stop: (1.21893753s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-962911 --log_dir /tmp/nospam-962911 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-962911 --log_dir /tmp/nospam-962911 stop
--- PASS: TestErrorSpam/stop (1.40s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17363-334135/.minikube/files/etc/test/nested/copy/340929/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (71.89s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-368978 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-368978 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m11.893805014s)
--- PASS: TestFunctional/serial/StartWithProxy (71.89s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (41.65s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-368978 --alsologtostderr -v=8
E1005 20:10:20.647499  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/addons-029116/client.crt: no such file or directory
E1005 20:10:20.653340  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/addons-029116/client.crt: no such file or directory
E1005 20:10:20.663747  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/addons-029116/client.crt: no such file or directory
E1005 20:10:20.684115  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/addons-029116/client.crt: no such file or directory
E1005 20:10:20.724506  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/addons-029116/client.crt: no such file or directory
E1005 20:10:20.804962  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/addons-029116/client.crt: no such file or directory
E1005 20:10:20.965496  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/addons-029116/client.crt: no such file or directory
E1005 20:10:21.286096  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/addons-029116/client.crt: no such file or directory
E1005 20:10:21.926894  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/addons-029116/client.crt: no such file or directory
E1005 20:10:23.207525  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/addons-029116/client.crt: no such file or directory
E1005 20:10:25.768102  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/addons-029116/client.crt: no such file or directory
E1005 20:10:30.888692  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/addons-029116/client.crt: no such file or directory
E1005 20:10:41.129800  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/addons-029116/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-368978 --alsologtostderr -v=8: (41.651324961s)
functional_test.go:659: soft start took 41.652168216s for "functional-368978" cluster.
--- PASS: TestFunctional/serial/SoftStart (41.65s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-368978 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 cache add registry.k8s.io/pause:3.1
E1005 20:11:01.610665  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/addons-029116/client.crt: no such file or directory
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-368978 cache add registry.k8s.io/pause:3.1: (1.008798604s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-368978 cache add registry.k8s.io/pause:3.3: (1.11756967s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.11s)
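
cache add pulls an image on the host and loads it into the node's runtime. A minimal sketch of the commands exercised above, plus the in-node check used later in this suite:

    # cache remote images into the profile
    minikube -p functional-368978 cache add registry.k8s.io/pause:3.1
    minikube -p functional-368978 cache add registry.k8s.io/pause:3.3
    minikube -p functional-368978 cache add registry.k8s.io/pause:latest
    # confirm they are visible inside the node
    minikube -p functional-368978 ssh sudo crictl images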

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.27s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-368978 /tmp/TestFunctionalserialCacheCmdcacheadd_local773396586/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 cache add minikube-local-cache-test:functional-368978
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 cache delete minikube-local-cache-test:functional-368978
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-368978
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.27s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.71s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-368978 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (289.297547ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.71s)
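
The reload flow above removes an image inside the node, confirms it is gone (crictl inspecti exits 1), and restores it from the host-side cache. The same sequence by hand:

    minikube -p functional-368978 ssh sudo crictl rmi registry.k8s.io/pause:latest
    minikube -p functional-368978 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: no such image
    minikube -p functional-368978 cache reload
    minikube -p functional-368978 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds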

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 kubectl -- --context functional-368978 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-368978 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (26.76s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-368978 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-368978 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (26.76386283s)
functional_test.go:757: restart took 26.764061912s for "functional-368978" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (26.76s)
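
--extra-config forwards a component flag into the running cluster; here it enables an extra apiserver admission plugin and restarts the existing profile in place:

    # restart functional-368978 with NamespaceAutoProvision enabled on the apiserver
    minikube start -p functional-368978 \
        --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all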

                                                
                                    
TestFunctional/serial/ComponentHealth (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-368978 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.08s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.5s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-368978 logs: (1.498267426s)
--- PASS: TestFunctional/serial/LogsCmd (1.50s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.53s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 logs --file /tmp/TestFunctionalserialLogsFileCmd1721721766/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-368978 logs --file /tmp/TestFunctionalserialLogsFileCmd1721721766/001/logs.txt: (1.525591399s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.53s)

                                                
                                    
TestFunctional/serial/InvalidService (4.59s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-368978 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-368978
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-368978: exit status 115 (352.870169ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31495 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-368978 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.59s)
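
minikube service fails fast when a Service selects no running pod; the exit status 115 above corresponds to the SVC_UNREACHABLE reason. A sketch using the same fixture:

    kubectl --context functional-368978 apply -f testdata/invalidsvc.yaml
    minikube -p functional-368978 service invalid-svc   # exits 115: no running pod backs the service
    kubectl --context functional-368978 delete -f testdata/invalidsvc.yaml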

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-368978 config get cpus: exit status 14 (80.111893ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-368978 config get cpus: exit status 14 (54.402676ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.41s)
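
config get exits with status 14 when the key is unset, which is what the test checks before and after the set/unset cycle:

    minikube -p functional-368978 config get cpus     # exit 14: key not found
    minikube -p functional-368978 config set cpus 2
    minikube -p functional-368978 config get cpus     # prints 2
    minikube -p functional-368978 config unset cpus
    minikube -p functional-368978 config get cpus     # exit 14 again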

                                                
                                    
TestFunctional/parallel/DashboardCmd (11.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-368978 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-368978 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 374086: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.65s)

                                                
                                    
TestFunctional/parallel/DryRun (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-368978 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-368978 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (197.170825ms)

                                                
                                                
-- stdout --
	* [functional-368978] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17363
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17363-334135/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-334135/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1005 20:11:43.985088  372824 out.go:296] Setting OutFile to fd 1 ...
	I1005 20:11:43.985374  372824 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 20:11:43.985383  372824 out.go:309] Setting ErrFile to fd 2...
	I1005 20:11:43.985388  372824 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 20:11:43.985628  372824 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-334135/.minikube/bin
	I1005 20:11:43.986237  372824 out.go:303] Setting JSON to false
	I1005 20:11:43.987588  372824 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6833,"bootTime":1696529871,"procs":344,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1005 20:11:43.987678  372824 start.go:138] virtualization: kvm guest
	I1005 20:11:43.990163  372824 out.go:177] * [functional-368978] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1005 20:11:43.991825  372824 out.go:177]   - MINIKUBE_LOCATION=17363
	I1005 20:11:43.991879  372824 notify.go:220] Checking for updates...
	I1005 20:11:43.993267  372824 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1005 20:11:43.994881  372824 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17363-334135/kubeconfig
	I1005 20:11:43.996240  372824 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-334135/.minikube
	I1005 20:11:43.997656  372824 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1005 20:11:43.999270  372824 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1005 20:11:44.001183  372824 config.go:182] Loaded profile config "functional-368978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1005 20:11:44.001836  372824 driver.go:378] Setting default libvirt URI to qemu:///system
	I1005 20:11:44.032590  372824 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1005 20:11:44.032713  372824 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 20:11:44.112430  372824 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:true NGoroutines:47 SystemTime:2023-10-05 20:11:44.100521507 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1005 20:11:44.112547  372824 docker.go:294] overlay module found
	I1005 20:11:44.114608  372824 out.go:177] * Using the docker driver based on existing profile
	I1005 20:11:44.116080  372824 start.go:298] selected driver: docker
	I1005 20:11:44.116105  372824 start.go:902] validating driver "docker" against &{Name:functional-368978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:functional-368978 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1005 20:11:44.116206  372824 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1005 20:11:44.118411  372824 out.go:177] 
	W1005 20:11:44.121446  372824 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1005 20:11:44.123053  372824 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-368978 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.51s)
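
--dry-run runs the full validation path without creating anything, so the undersized memory request is rejected up front with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY), while a dry run with the profile's existing settings passes:

    minikube start -p functional-368978 --dry-run --memory 250MB \
        --driver=docker --container-runtime=crio    # exits 23: below the 1800MB usable minimum
    minikube start -p functional-368978 --dry-run \
        --driver=docker --container-runtime=crio    # validates cleanly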

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-368978 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-368978 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (219.495635ms)

                                                
                                                
-- stdout --
	* [functional-368978] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17363
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17363-334135/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-334135/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1005 20:11:43.783719  372649 out.go:296] Setting OutFile to fd 1 ...
	I1005 20:11:43.783929  372649 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 20:11:43.783970  372649 out.go:309] Setting ErrFile to fd 2...
	I1005 20:11:43.783993  372649 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 20:11:43.784493  372649 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-334135/.minikube/bin
	I1005 20:11:43.785336  372649 out.go:303] Setting JSON to false
	I1005 20:11:43.787014  372649 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6833,"bootTime":1696529871,"procs":343,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1005 20:11:43.787216  372649 start.go:138] virtualization: kvm guest
	I1005 20:11:43.789888  372649 out.go:177] * [functional-368978] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	I1005 20:11:43.791507  372649 notify.go:220] Checking for updates...
	I1005 20:11:43.791511  372649 out.go:177]   - MINIKUBE_LOCATION=17363
	I1005 20:11:43.793170  372649 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1005 20:11:43.795088  372649 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17363-334135/kubeconfig
	I1005 20:11:43.796541  372649 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-334135/.minikube
	I1005 20:11:43.798016  372649 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1005 20:11:43.799429  372649 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1005 20:11:43.801758  372649 config.go:182] Loaded profile config "functional-368978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1005 20:11:43.802495  372649 driver.go:378] Setting default libvirt URI to qemu:///system
	I1005 20:11:43.837642  372649 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1005 20:11:43.837777  372649 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 20:11:43.916055  372649 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:true NGoroutines:47 SystemTime:2023-10-05 20:11:43.904640273 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1005 20:11:43.916211  372649 docker.go:294] overlay module found
	I1005 20:11:43.918996  372649 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1005 20:11:43.920537  372649 start.go:298] selected driver: docker
	I1005 20:11:43.920562  372649 start.go:902] validating driver "docker" against &{Name:functional-368978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:functional-368978 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1005 20:11:43.920718  372649 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1005 20:11:43.923420  372649 out.go:177] 
	W1005 20:11:43.925745  372649 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1005 20:11:43.927235  372649 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.22s)
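
This is the same failing dry-run with minikube's output localized to French. The locale is not visible in the log; assuming the harness selects it through the standard LC_ALL/LANG environment variables, a hypothetical reproduction looks like:

    # hypothetical: force French output for the same failing dry-run
    LC_ALL=fr_FR.UTF-8 minikube start -p functional-368978 --dry-run --memory 250MB \
        --driver=docker --container-runtime=crio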

                                                
                                    
TestFunctional/parallel/StatusCmd (1.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.43s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (9.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-368978 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-368978 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-t896q" [ec53137f-8fa0-4ab1-b9a3-3b41c7922571] Pending
helpers_test.go:344: "hello-node-connect-55497b8b78-t896q" [ec53137f-8fa0-4ab1-b9a3-3b41c7922571] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-t896q" [ec53137f-8fa0-4ab1-b9a3-3b41c7922571] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.070409842s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:30251
functional_test.go:1674: http://192.168.49.2:30251: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-55497b8b78-t896q

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30251
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.14s)
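
This is an end-to-end NodePort round trip: deploy, expose, resolve the URL through minikube, and fetch it. A sketch with the image and port used above (the assigned node port varies per run):

    kubectl --context functional-368978 create deployment hello-node-connect \
        --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-368978 expose deployment hello-node-connect \
        --type=NodePort --port=8080
    minikube -p functional-368978 service hello-node-connect --url   # e.g. http://192.168.49.2:30251
    curl http://192.168.49.2:30251/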

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (27s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [8ac3245b-2e8a-4eb6-acf2-92dd09c1871f] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.014134399s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-368978 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-368978 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-368978 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-368978 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [096a79d2-907d-478e-b638-94578c30a4b5] Pending
helpers_test.go:344: "sp-pod" [096a79d2-907d-478e-b638-94578c30a4b5] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [096a79d2-907d-478e-b638-94578c30a4b5] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.028618188s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-368978 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-368978 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-368978 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [afbd384c-8b0c-42d0-9e36-42b4fe753536] Pending
helpers_test.go:344: "sp-pod" [afbd384c-8b0c-42d0-9e36-42b4fe753536] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [afbd384c-8b0c-42d0-9e36-42b4fe753536] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.027661392s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-368978 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.00s)
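
The persistence check writes a file through the first pod, deletes that pod, and reads the file back from a replacement pod bound to the same claim. The same sequence by hand, using the storage-provisioner fixtures applied above:

    kubectl --context functional-368978 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-368978 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-368978 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-368978 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-368978 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-368978 exec sp-pod -- ls /tmp/mount   # foo survives on the claim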

                                                
                                    
TestFunctional/parallel/SSHCmd (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 ssh "cat /etc/hostname"
2023/10/05 20:11:55 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/SSHCmd (0.68s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 ssh -n functional-368978 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 cp functional-368978:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1861501435/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 ssh -n functional-368978 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.39s)
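
minikube cp copies files in either direction between host and node, verified here by cat-ing the file over ssh. A sketch (the host-side destination below is an arbitrary choice, not the test's temp dir):

    minikube -p functional-368978 cp testdata/cp-test.txt /home/docker/cp-test.txt
    minikube -p functional-368978 ssh -n functional-368978 "sudo cat /home/docker/cp-test.txt"
    minikube -p functional-368978 cp functional-368978:/home/docker/cp-test.txt /tmp/cp-test.txt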

                                                
                                    
TestFunctional/parallel/MySQL (23.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-368978 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-59wsq" [d8f0f2d9-858e-4cb9-9072-3ecc56f3691a] Pending
helpers_test.go:344: "mysql-859648c796-59wsq" [d8f0f2d9-858e-4cb9-9072-3ecc56f3691a] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-59wsq" [d8f0f2d9-858e-4cb9-9072-3ecc56f3691a] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.018351872s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-368978 exec mysql-859648c796-59wsq -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-368978 exec mysql-859648c796-59wsq -- mysql -ppassword -e "show databases;": exit status 1 (158.208643ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-368978 exec mysql-859648c796-59wsq -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-368978 exec mysql-859648c796-59wsq -- mysql -ppassword -e "show databases;": exit status 1 (147.722112ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-368978 exec mysql-859648c796-59wsq -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (23.29s)
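
The two ERROR 2002 failures just mean mysqld was still initializing its socket; the test retries the same exec until it succeeds. A manual equivalent (pod name is from this run):

    # retry until mysqld accepts socket connections
    until kubectl --context functional-368978 exec mysql-859648c796-59wsq -- \
        mysql -ppassword -e "show databases;"; do sleep 2; done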

                                                
                                    
TestFunctional/parallel/FileSync (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/340929/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 ssh "sudo cat /etc/test/nested/copy/340929/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.48s)

                                                
                                    
TestFunctional/parallel/CertSync (2.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/340929.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 ssh "sudo cat /etc/ssl/certs/340929.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/340929.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 ssh "sudo cat /usr/share/ca-certificates/340929.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3409292.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 ssh "sudo cat /etc/ssl/certs/3409292.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/3409292.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 ssh "sudo cat /usr/share/ca-certificates/3409292.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.17s)
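
Note: the .0 names (51391683.0, 3ec20f2e.0) are OpenSSL subject-hash aliases for the synced PEMs, which is how TLS libraries look up CA certs in /etc/ssl/certs. To confirm a hash matches its certificate (assuming openssl is available in the node image):

    # Should print the basename of the corresponding .0 file.
    out/minikube-linux-amd64 -p functional-368978 ssh \
      "sudo openssl x509 -in /etc/ssl/certs/340929.pem -noout -subject_hash"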

TestFunctional/parallel/NodeLabels (0.1s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-368978 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)
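
Note: the go-template prints every label key on the first node. An equivalent jsonpath form, shown only as an alternative:

    kubectl --context functional-368978 get nodes -o jsonpath='{.items[0].metadata.labels}'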

TestFunctional/parallel/NonActiveRuntimeDisabled (0.87s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-368978 ssh "sudo systemctl is-active docker": exit status 1 (436.19174ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-368978 ssh "sudo systemctl is-active containerd": exit status 1 (434.590498ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.87s)
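
Note: systemctl is-active exits 3 for an inactive unit (the "Process exited with status 3" in stderr above), which minikube ssh surfaces as a non-zero exit; "inactive" on stdout plus a failing exit is exactly what proves docker and containerd are off on this crio cluster. To check by hand:

    # Expect "inactive" and a non-zero exit for both units.
    out/minikube-linux-amd64 -p functional-368978 ssh "sudo systemctl is-active docker"; echo "exit=$?"
    out/minikube-linux-amd64 -p functional-368978 ssh "sudo systemctl is-active containerd"; echo "exit=$?"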

TestFunctional/parallel/License (0.2s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.20s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-368978 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-368978 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-pw2m2" [d8d1f86a-1692-4095-8000-94728ea05e01] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-pw2m2" [d8d1f86a-1692-4095-8000-94728ea05e01] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.021984442s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.28s)
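
Note: these two kubectl calls are the whole fixture that the later ServiceCmd subtests query. Reproducing it outside the test:

    # Create the echo server and expose it on a NodePort.
    kubectl --context functional-368978 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-368978 expose deployment hello-node --type=NodePort --port=8080
    kubectl --context functional-368978 get pods -l app=hello-node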

TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

TestFunctional/parallel/MountCmd/any-port (8.89s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-368978 /tmp/TestFunctionalparallelMountCmdany-port2260171826/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1696536702559468461" to /tmp/TestFunctionalparallelMountCmdany-port2260171826/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1696536702559468461" to /tmp/TestFunctionalparallelMountCmdany-port2260171826/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1696536702559468461" to /tmp/TestFunctionalparallelMountCmdany-port2260171826/001/test-1696536702559468461
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 ssh "findmnt -T /mount-9p | grep 9p"
E1005 20:11:42.570980  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/addons-029116/client.crt: no such file or directory
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-368978 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (354.394258ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  5 20:11 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  5 20:11 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  5 20:11 test-1696536702559468461
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 ssh cat /mount-9p/test-1696536702559468461
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-368978 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [30a26bea-9730-4df5-8f92-1af856c98392] Pending
helpers_test.go:344: "busybox-mount" [30a26bea-9730-4df5-8f92-1af856c98392] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [30a26bea-9730-4df5-8f92-1af856c98392] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [30a26bea-9730-4df5-8f92-1af856c98392] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.015190762s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-368978 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-368978 /tmp/TestFunctionalparallelMountCmdany-port2260171826/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.89s)
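
Note: the first findmnt probe may fail (exit 1) while the background mount daemon is still establishing the 9p connection; the test just retries. The same round trip by hand, with a hypothetical host directory:

    # Mount a host dir into the node over 9p, verify it, then force-unmount.
    out/minikube-linux-amd64 mount -p functional-368978 /tmp/hostdir:/mount-9p &
    out/minikube-linux-amd64 -p functional-368978 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-368978 ssh "sudo umount -f /mount-9p"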

TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "335.838377ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "53.504604ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.49s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "440.729052ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "50.951559ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.49s)
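
Note: the timing gap is the point of these subtests: the plain and -o json listings probe each cluster's status, while -l/--light only read the profile configs, which is why the light variants finish in ~50ms instead of ~340-440ms. The variants exercised:

    out/minikube-linux-amd64 profile list                  # table output, probes cluster status
    out/minikube-linux-amd64 profile list -o json          # same data as JSON
    out/minikube-linux-amd64 profile list -o json --light  # config only, skips status probes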

TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

TestFunctional/parallel/MountCmd/specific-port (1.99s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-368978 /tmp/TestFunctionalparallelMountCmdspecific-port2814880726/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-368978 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (322.530844ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-368978 /tmp/TestFunctionalparallelMountCmdspecific-port2814880726/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-368978 ssh "sudo umount -f /mount-9p": exit status 1 (338.341283ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-368978 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-368978 /tmp/TestFunctionalparallelMountCmdspecific-port2814880726/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.99s)
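
Note: umount exits 32 ("not mounted") because stopping the daemon had already torn the mount down; the test logs that and proceeds with cleanup. The only new flag this subtest adds is the pinned 9p server port (host directory hypothetical):

    out/minikube-linux-amd64 mount -p functional-368978 /tmp/hostdir:/mount-9p --port 46464 &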

TestFunctional/parallel/ServiceCmd/List (0.81s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.81s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.13s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-368978 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1626938042/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-368978 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1626938042/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-368978 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1626938042/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-368978 ssh "findmnt -T" /mount1: exit status 1 (581.427269ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-368978 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-368978 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1626938042/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-368978 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1626938042/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-368978 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1626938042/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.13s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 service list -o json
functional_test.go:1493: Took "550.992834ms" to run "out/minikube-linux-amd64 -p functional-368978 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.55s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:30089
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.40s)

TestFunctional/parallel/ServiceCmd/Format (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

TestFunctional/parallel/ServiceCmd/URL (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:30089
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.39s)
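
Note: HTTPS, Format, and URL are three views of the same NodePort endpoint discovered via service --url. Once printed, the endpoint can be probed directly (IP and port are from this run):

    URL=$(out/minikube-linux-amd64 -p functional-368978 service hello-node --url)
    curl -s "$URL"   # echoserver replies with the request details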

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.64s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-368978 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-368978 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-368978 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 376359: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-368978 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.64s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (1.51s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-amd64 -p functional-368978 version -o=json --components: (1.505509212s)
--- PASS: TestFunctional/parallel/Version/components (1.51s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-368978 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.2
registry.k8s.io/kube-proxy:v1.28.2
registry.k8s.io/kube-controller-manager:v1.28.2
registry.k8s.io/kube-apiserver:v1.28.2
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-368978
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-368978 image ls --format short --alsologtostderr:
I1005 20:12:23.719549  379365 out.go:296] Setting OutFile to fd 1 ...
I1005 20:12:23.719872  379365 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1005 20:12:23.719885  379365 out.go:309] Setting ErrFile to fd 2...
I1005 20:12:23.719892  379365 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1005 20:12:23.720133  379365 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-334135/.minikube/bin
I1005 20:12:23.720836  379365 config.go:182] Loaded profile config "functional-368978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1005 20:12:23.720989  379365 config.go:182] Loaded profile config "functional-368978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1005 20:12:23.721464  379365 cli_runner.go:164] Run: docker container inspect functional-368978 --format={{.State.Status}}
I1005 20:12:23.741164  379365 ssh_runner.go:195] Run: systemctl --version
I1005 20:12:23.741217  379365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-368978
I1005 20:12:23.760606  379365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/functional-368978/id_rsa Username:docker}
I1005 20:12:23.856241  379365 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-368978 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-scheduler          | v1.28.2            | 7a5d9d67a13f6 | 61.5MB |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| docker.io/library/mysql                 | 5.7                | a5b7ceed40749 | 601MB  |
| registry.k8s.io/kube-apiserver          | v1.28.2            | cdcab12b2dd16 | 127MB  |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
| docker.io/library/nginx                 | alpine             | d571254277f6a | 44.4MB |
| docker.io/library/nginx                 | latest             | 61395b4c586da | 191MB  |
| gcr.io/google-containers/addon-resizer  | functional-368978  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-controller-manager | v1.28.2            | 55f13c92defb1 | 123MB  |
| registry.k8s.io/kube-proxy              | v1.28.2            | c120fed2beb84 | 74.7MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-368978 image ls --format table --alsologtostderr:
I1005 20:12:23.968527  379502 out.go:296] Setting OutFile to fd 1 ...
I1005 20:12:23.968800  379502 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1005 20:12:23.968813  379502 out.go:309] Setting ErrFile to fd 2...
I1005 20:12:23.968820  379502 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1005 20:12:23.969078  379502 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-334135/.minikube/bin
I1005 20:12:23.969769  379502 config.go:182] Loaded profile config "functional-368978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1005 20:12:23.969888  379502 config.go:182] Loaded profile config "functional-368978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1005 20:12:23.970408  379502 cli_runner.go:164] Run: docker container inspect functional-368978 --format={{.State.Status}}
I1005 20:12:23.992804  379502 ssh_runner.go:195] Run: systemctl --version
I1005 20:12:23.992890  379502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-368978
I1005 20:12:24.017005  379502 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/functional-368978/id_rsa Username:docker}
I1005 20:12:24.115874  379502 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-368978 image ls --format json --alsologtostderr:
[{"id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"65258016"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-368978"],"size":"34114467"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0","repoDigests":["registr
y.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded","registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.2"],"size":"74687895"},{"id":"7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab","registry.k8s.io/kube-scheduler@sha256:8fc5b9b97128515266d5435273682ba36115d9ca1b68a5749e6f9a23927ef543"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.2"],"size":"61485878"},{"id":"61395b4c586da2b9b3b7ca903ea6a448e6783dfdd7f768ff2c1a0f3360aaba99","repoDigests":["docker.io/library/nginx@sha256:32da30332506740a2f7c34d5dc70467b7f14ec67d912703568daff790ab3f755","docker.io/library/nginx@sha256:b2888fc9cfe7cd9d6727aeb462d13c7c45dec413b66f2819a36c4a3cb9d4df75"],"repoTags":["docker.io/library/nginx:latest"],"size":"190820094"},{"id":"56cc512116c8f894f11ce1995460ae
f1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"]
,"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"d571254277f6a0ba9d0c4a08f29b94476dcd4a95275bd484ece060ee4ff847e4","repoDigests":["docker.io/library/nginx@sha256:34b58b4f5c6d133d97298cbaae140283dc325ff1aeffb28176f63078baeffd14","docker.io/library/nginx@sha256:4c93a3bd8bf95412889dd84213570102176b6052d88bb828eaf449c56aca55ef"],"repoTags":["docker.io/library/nginx:alpine"],"size":"44434729"},{"id":"cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce","repoDigests":["registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631","registry.k8s.io/kube-apiserver@sha256:6beea2e5531a0606613
594fd3ed92d71bbdcef99dd3237522049a0b32cad736c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.2"],"size":"127149008"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"a5b7ceed4074932a04ea553af3124bb03b249affe14899e2cd746d1a63e12ecc","repoD
igests":["docker.io/library/mysql@sha256:a06310bb26d02a6118ae7fa825c172a0bf594e178c72230fc31674f348033270","docker.io/library/mysql@sha256:e857469c4d22da38abe1f1b60a0e0bf7b0a5812f6bea1e247e375aa1701db925"],"repoTags":["docker.io/library/mysql:5.7"],"size":"600779225"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"},{"id":"55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4","registry.k8s.io/kube-controller-manager@sha256:757a9c9d2f2329799490f9cec6c8ea12dfe4b6225051f6436f39d22a1def682e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.2"],"size":"123171638"},{"id":"e6f18
16883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-368978 image ls --format json --alsologtostderr:
I1005 20:12:23.726879  379363 out.go:296] Setting OutFile to fd 1 ...
I1005 20:12:23.727082  379363 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1005 20:12:23.727094  379363 out.go:309] Setting ErrFile to fd 2...
I1005 20:12:23.727101  379363 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1005 20:12:23.727398  379363 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-334135/.minikube/bin
I1005 20:12:23.728315  379363 config.go:182] Loaded profile config "functional-368978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1005 20:12:23.728486  379363 config.go:182] Loaded profile config "functional-368978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1005 20:12:23.729103  379363 cli_runner.go:164] Run: docker container inspect functional-368978 --format={{.State.Status}}
I1005 20:12:23.750826  379363 ssh_runner.go:195] Run: systemctl --version
I1005 20:12:23.750903  379363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-368978
I1005 20:12:23.772380  379363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/functional-368978/id_rsa Username:docker}
I1005 20:12:23.871969  379363 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)
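
Note: the JSON format carries the same crictl inventory as the short and table listings, just machine-readable. With jq on the host (an assumption), tags can be pulled out directly:

    out/minikube-linux-amd64 -p functional-368978 image ls --format json | jq -r '.[].repoTags[]'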

TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-368978 image ls --format yaml --alsologtostderr:
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0
repoDigests:
- registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded
- registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf
repoTags:
- registry.k8s.io/kube-proxy:v1.28.2
size: "74687895"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 61395b4c586da2b9b3b7ca903ea6a448e6783dfdd7f768ff2c1a0f3360aaba99
repoDigests:
- docker.io/library/nginx@sha256:32da30332506740a2f7c34d5dc70467b7f14ec67d912703568daff790ab3f755
- docker.io/library/nginx@sha256:b2888fc9cfe7cd9d6727aeb462d13c7c45dec413b66f2819a36c4a3cb9d4df75
repoTags:
- docker.io/library/nginx:latest
size: "190820094"
- id: a5b7ceed4074932a04ea553af3124bb03b249affe14899e2cd746d1a63e12ecc
repoDigests:
- docker.io/library/mysql@sha256:a06310bb26d02a6118ae7fa825c172a0bf594e178c72230fc31674f348033270
- docker.io/library/mysql@sha256:e857469c4d22da38abe1f1b60a0e0bf7b0a5812f6bea1e247e375aa1701db925
repoTags:
- docker.io/library/mysql:5.7
size: "600779225"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-368978
size: "34114467"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631
- registry.k8s.io/kube-apiserver@sha256:6beea2e5531a0606613594fd3ed92d71bbdcef99dd3237522049a0b32cad736c
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.2
size: "127149008"
- id: 7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab
- registry.k8s.io/kube-scheduler@sha256:8fc5b9b97128515266d5435273682ba36115d9ca1b68a5749e6f9a23927ef543
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.2
size: "61485878"
- id: d571254277f6a0ba9d0c4a08f29b94476dcd4a95275bd484ece060ee4ff847e4
repoDigests:
- docker.io/library/nginx@sha256:34b58b4f5c6d133d97298cbaae140283dc325ff1aeffb28176f63078baeffd14
- docker.io/library/nginx@sha256:4c93a3bd8bf95412889dd84213570102176b6052d88bb828eaf449c56aca55ef
repoTags:
- docker.io/library/nginx:alpine
size: "44434729"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: 55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4
- registry.k8s.io/kube-controller-manager@sha256:757a9c9d2f2329799490f9cec6c8ea12dfe4b6225051f6436f39d22a1def682e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.2
size: "123171638"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-368978 image ls --format yaml --alsologtostderr:
I1005 20:12:23.727931  379364 out.go:296] Setting OutFile to fd 1 ...
I1005 20:12:23.728057  379364 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1005 20:12:23.728068  379364 out.go:309] Setting ErrFile to fd 2...
I1005 20:12:23.728076  379364 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1005 20:12:23.728367  379364 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-334135/.minikube/bin
I1005 20:12:23.729212  379364 config.go:182] Loaded profile config "functional-368978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1005 20:12:23.729363  379364 config.go:182] Loaded profile config "functional-368978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1005 20:12:23.729902  379364 cli_runner.go:164] Run: docker container inspect functional-368978 --format={{.State.Status}}
I1005 20:12:23.758408  379364 ssh_runner.go:195] Run: systemctl --version
I1005 20:12:23.758477  379364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-368978
I1005 20:12:23.778046  379364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/functional-368978/id_rsa Username:docker}
I1005 20:12:23.876047  379364 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

TestFunctional/parallel/ImageCommands/ImageBuild (1.91s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-368978 ssh pgrep buildkitd: exit status 1 (278.377278ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 image build -t localhost/my-image:functional-368978 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-368978 image build -t localhost/my-image:functional-368978 testdata/build --alsologtostderr: (1.40848716s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-368978 image build -t localhost/my-image:functional-368978 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 41cbb4d75c5
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-368978
--> 417a7349616
Successfully tagged localhost/my-image:functional-368978
417a73496162062de583624b740eba5b66d6667a967e44e5e9510e4316479239
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-368978 image build -t localhost/my-image:functional-368978 testdata/build --alsologtostderr:
I1005 20:12:24.222649  379631 out.go:296] Setting OutFile to fd 1 ...
I1005 20:12:24.222811  379631 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1005 20:12:24.222826  379631 out.go:309] Setting ErrFile to fd 2...
I1005 20:12:24.222834  379631 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1005 20:12:24.223040  379631 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-334135/.minikube/bin
I1005 20:12:24.223737  379631 config.go:182] Loaded profile config "functional-368978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1005 20:12:24.224355  379631 config.go:182] Loaded profile config "functional-368978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1005 20:12:24.224815  379631 cli_runner.go:164] Run: docker container inspect functional-368978 --format={{.State.Status}}
I1005 20:12:24.244524  379631 ssh_runner.go:195] Run: systemctl --version
I1005 20:12:24.244579  379631 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-368978
I1005 20:12:24.263247  379631 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/functional-368978/id_rsa Username:docker}
I1005 20:12:24.355978  379631 build_images.go:151] Building image from path: /tmp/build.2303430505.tar
I1005 20:12:24.356045  379631 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1005 20:12:24.365443  379631 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2303430505.tar
I1005 20:12:24.369228  379631 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2303430505.tar: stat -c "%s %y" /var/lib/minikube/build/build.2303430505.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2303430505.tar': No such file or directory
I1005 20:12:24.369261  379631 ssh_runner.go:362] scp /tmp/build.2303430505.tar --> /var/lib/minikube/build/build.2303430505.tar (3072 bytes)
I1005 20:12:24.395795  379631 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2303430505
I1005 20:12:24.405284  379631 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2303430505 -xf /var/lib/minikube/build/build.2303430505.tar
I1005 20:12:24.415241  379631 crio.go:297] Building image: /var/lib/minikube/build/build.2303430505
I1005 20:12:24.415304  379631 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-368978 /var/lib/minikube/build/build.2303430505 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1005 20:12:25.564130  379631 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-368978 /var/lib/minikube/build/build.2303430505 --cgroup-manager=cgroupfs: (1.148795407s)
I1005 20:12:25.564230  379631 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2303430505
I1005 20:12:25.573806  379631 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2303430505.tar
I1005 20:12:25.582955  379631 build_images.go:207] Built localhost/my-image:functional-368978 from /tmp/build.2303430505.tar
I1005 20:12:25.582996  379631 build_images.go:123] succeeded building to: functional-368978
I1005 20:12:25.583002  379631 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.91s)
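
Note: because buildkitd is not running (the pgrep probe fails), the crio path builds with podman inside the node. Judging from the STEP lines, testdata/build amounts to a three-step Dockerfile; a hypothetical reconstruction:

    # Dockerfile (reconstructed from STEP 1/3..3/3; content.txt is whatever testdata ships)
    #   FROM gcr.io/k8s-minikube/busybox
    #   RUN true
    #   ADD content.txt /
    out/minikube-linux-amd64 -p functional-368978 image build -t localhost/my-image:functional-368978 testdata/build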

TestFunctional/parallel/ImageCommands/Setup (1.08s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.052538945s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-368978
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.08s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-368978 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (13.55s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-368978 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [5cafdb69-4ff8-4c9e-8400-e9d744134a7e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [5cafdb69-4ff8-4c9e-8400-e9d744134a7e] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 13.061820637s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (13.55s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (6.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 image load --daemon gcr.io/google-containers/addon-resizer:functional-368978 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-368978 image load --daemon gcr.io/google-containers/addon-resizer:functional-368978 --alsologtostderr: (6.250683398s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (6.48s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 image load --daemon gcr.io/google-containers/addon-resizer:functional-368978 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-368978 image load --daemon gcr.io/google-containers/addon-resizer:functional-368978 --alsologtostderr: (2.866281365s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.11s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-368978 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)
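The jsonpath query above is how the address assigned by `minikube tunnel` to a LoadBalancer service is read back. A minimal Go sketch of the same check as a poll loop, assuming kubectl is on PATH; the context and service name are the ones from this run, and the two-minute timeout is an arbitrary choice.

// Poll until the tunnel has populated .status.loadBalancer.ingress[0].ip.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	args := []string{
		"--context", "functional-368978", "get", "svc", "nginx-svc",
		"-o", "jsonpath={.status.loadBalancer.ingress[0].ip}",
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", args...).Output()
		ip := strings.TrimSpace(string(out))
		if err == nil && ip != "" {
			fmt.Println("tunnel-assigned ingress IP:", ip) // e.g. 10.109.76.17 above
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for a LoadBalancer ingress IP")
}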

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.109.76.17 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-368978 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 image save gcr.io/google-containers/addon-resizer:functional-368978 /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-368978 image save gcr.io/google-containers/addon-resizer:functional-368978 /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (2.213646752s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.21s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 image rm gcr.io/google-containers/addon-resizer:functional-368978 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 image load /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-368978 image load /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.177729027s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.39s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.92s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-368978
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-368978 image save --daemon gcr.io/google-containers/addon-resizer:functional-368978 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-368978
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.92s)
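The three commands above form a round trip: the tag is removed from the host daemon, restored from the cluster with `image save --daemon`, and confirmed with `docker image inspect`, which exits non-zero when the image is absent. A minimal Go sketch of the same verification, assuming it runs from a checkout where the `out/minikube-linux-amd64` binary exists.

// Round-trip check: rmi, save --daemon, inspect.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	img := "gcr.io/google-containers/addon-resizer:functional-368978"
	mk := "out/minikube-linux-amd64" // assumption: run from the repo root

	_ = exec.Command("docker", "rmi", img).Run() // ignore "no such image"
	if err := exec.Command(mk, "-p", "functional-368978",
		"image", "save", "--daemon", img).Run(); err != nil {
		fmt.Println("image save --daemon failed:", err)
		return
	}
	// docker image inspect exits non-zero if the image is still missing.
	if err := exec.Command("docker", "image", "inspect", img).Run(); err != nil {
		fmt.Println("image missing after save --daemon:", err)
		return
	}
	fmt.Println("image restored to the local docker daemon")
}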

TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-368978
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-368978
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-368978
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (65.85s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-540731 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1005 20:13:04.491249  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/addons-029116/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-540731 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m5.844988948s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (65.85s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.35s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-540731 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-540731 addons enable ingress --alsologtostderr -v=5: (10.353122783s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.35s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.58s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-540731 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.58s)

TestJSONOutput/start/Command (40.11s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-275269 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E1005 20:17:02.885630  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/functional-368978/client.crt: no such file or directory
E1005 20:17:23.366382  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/functional-368978/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-275269 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (40.111622759s)
--- PASS: TestJSONOutput/start/Command (40.11s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.71s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-275269 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.71s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.63s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-275269 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.63s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.89s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-275269 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-275269 --output=json --user=testUser: (5.888426968s)
--- PASS: TestJSONOutput/stop/Command (5.89s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-306002 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-306002 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (66.670677ms)

-- stdout --
	{"specversion":"1.0","id":"db6fa06f-b005-4d64-beb9-692c0f82a1f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-306002] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"667d639d-0b27-4cb7-8a49-a17f06456ed3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17363"}}
	{"specversion":"1.0","id":"389da5a8-c51b-4bfe-85d0-35dc0666a7d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5a6b4762-546a-4b4c-81b7-416db372e7f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17363-334135/kubeconfig"}}
	{"specversion":"1.0","id":"b584aa8c-e8f1-4a36-8e42-fcb3b969a730","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-334135/.minikube"}}
	{"specversion":"1.0","id":"f2d34811-bafe-482d-a5b3-99bed083b309","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"25708093-9747-4e86-826b-d1c65b89508e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a558d334-54d8-4f06-b1b0-ec729e8354bc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-306002" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-306002
--- PASS: TestErrorJSONOutput (0.21s)
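Every stdout line in the block above is a CloudEvents-style JSON object, which is what makes `--output=json` machine-consumable: step events carry currentstep/totalsteps, and a failure arrives as a single error event (here DRV_UNSUPPORTED_OS with exit code 56). A minimal Go sketch of a consumer, assuming the JSON stream is piped to stdin; the field names are taken directly from the events captured above.

// Line-oriented decoder for minikube's --output=json event stream.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // pipe `minikube start --output=json` here
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // skip any non-JSON noise
		}
		switch e.Type {
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n",
				e.Data["currentstep"], e.Data["totalsteps"], e.Data["message"])
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error %s (exit %s): %s\n",
				e.Data["name"], e.Data["exitcode"], e.Data["message"])
		}
	}
}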

TestKicCustomNetwork/create_custom_network (34.34s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-148750 --network=
E1005 20:18:04.327323  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/functional-368978/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-148750 --network=: (32.600561658s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-148750" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-148750
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-148750: (1.723516792s)
--- PASS: TestKicCustomNetwork/create_custom_network (34.34s)

TestKicCustomNetwork/use_default_bridge_network (27.75s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-480599 --network=bridge
E1005 20:18:51.662355  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731/client.crt: no such file or directory
E1005 20:18:51.667724  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731/client.crt: no such file or directory
E1005 20:18:51.678094  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731/client.crt: no such file or directory
E1005 20:18:51.698418  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731/client.crt: no such file or directory
E1005 20:18:51.738805  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731/client.crt: no such file or directory
E1005 20:18:51.819168  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731/client.crt: no such file or directory
E1005 20:18:51.979622  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-480599 --network=bridge: (26.14943992s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-480599" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-480599
E1005 20:18:52.299832  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731/client.crt: no such file or directory
E1005 20:18:52.940105  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-480599: (1.583798769s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (27.75s)

TestKicExistingNetwork (26.11s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-424169 --network=existing-network
E1005 20:18:54.221031  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731/client.crt: no such file or directory
E1005 20:18:56.782889  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731/client.crt: no such file or directory
E1005 20:19:01.903782  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731/client.crt: no such file or directory
E1005 20:19:12.144113  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-424169 --network=existing-network: (23.94870168s)
helpers_test.go:175: Cleaning up "existing-network-424169" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-424169
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-424169: (2.009416274s)
--- PASS: TestKicExistingNetwork (26.11s)

TestKicCustomSubnet (25.34s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-233937 --subnet=192.168.60.0/24
E1005 20:19:26.248325  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/functional-368978/client.crt: no such file or directory
E1005 20:19:32.625237  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-233937 --subnet=192.168.60.0/24: (23.619463749s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-233937 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-233937" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-233937
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-233937: (1.697128017s)
--- PASS: TestKicCustomSubnet (25.34s)
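The inspect command above is how the requested --subnet is verified: the value must round-trip through the network's IPAM config. A minimal Go sketch of the same assertion, assuming the docker CLI is on PATH and reusing the network name and subnet from this run.

// Assert that a docker network carries the subnet it was created with.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	want := "192.168.60.0/24"
	out, err := exec.Command("docker", "network", "inspect",
		"custom-subnet-233937", "--format",
		"{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	if got := strings.TrimSpace(string(out)); got != want {
		fmt.Printf("subnet mismatch: got %s, want %s\n", got, want)
		return
	}
	fmt.Println("subnet matches", want)
}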

TestKicStaticIP (28.68s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-938969 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-938969 --static-ip=192.168.200.200: (26.345977435s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-938969 ip
helpers_test.go:175: Cleaning up "static-ip-938969" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-938969
E1005 20:20:13.585930  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-938969: (2.202092076s)
--- PASS: TestKicStaticIP (28.68s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (51.5s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-685839 --driver=docker  --container-runtime=crio
E1005 20:20:20.648331  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/addons-029116/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-685839 --driver=docker  --container-runtime=crio: (21.870573722s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-692811 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-692811 --driver=docker  --container-runtime=crio: (24.41268934s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-685839
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-692811
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-692811" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-692811
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-692811: (1.913479445s)
helpers_test.go:175: Cleaning up "first-685839" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-685839
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-685839: (2.260027483s)
--- PASS: TestMinikubeProfile (51.50s)

TestMountStart/serial/StartWithMountFirst (8.45s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-618829 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-618829 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.453175065s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.45s)

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-618829 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (5.54s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-638763 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-638763 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.543979986s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.54s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-638763 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.71s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-618829 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-618829 --alsologtostderr -v=5: (1.7132695s)
--- PASS: TestMountStart/serial/DeleteFirst (1.71s)

TestMountStart/serial/VerifyMountPostDelete (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-638763 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

TestMountStart/serial/Stop (1.21s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-638763
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-638763: (1.206402236s)
--- PASS: TestMountStart/serial/Stop (1.21s)

TestMountStart/serial/RestartStopped (7.15s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-638763
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-638763: (6.151306037s)
--- PASS: TestMountStart/serial/RestartStopped (7.15s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-638763 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (119.52s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-401792 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E1005 20:21:35.506228  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731/client.crt: no such file or directory
E1005 20:21:42.402784  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/functional-368978/client.crt: no such file or directory
E1005 20:22:10.089342  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/functional-368978/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-401792 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m59.06278956s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-401792 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (119.52s)

TestMultiNode/serial/DeployApp2Nodes (3.22s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-401792 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-401792 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-401792 -- rollout status deployment/busybox: (1.439324468s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-401792 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-401792 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-401792 -- exec busybox-5bc68d56bd-bk8vz -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-401792 -- exec busybox-5bc68d56bd-zj2tk -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-401792 -- exec busybox-5bc68d56bd-bk8vz -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-401792 -- exec busybox-5bc68d56bd-zj2tk -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-401792 -- exec busybox-5bc68d56bd-bk8vz -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-401792 -- exec busybox-5bc68d56bd-zj2tk -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.22s)

TestMultiNode/serial/AddNode (19.62s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-401792 -v 3 --alsologtostderr
E1005 20:23:51.662590  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-401792 -v 3 --alsologtostderr: (19.000723066s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-401792 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (19.62s)

TestMultiNode/serial/ProfileList (0.28s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.28s)

TestMultiNode/serial/CopyFile (9.28s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-401792 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-401792 cp testdata/cp-test.txt multinode-401792:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-401792 ssh -n multinode-401792 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-401792 cp multinode-401792:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1752522126/001/cp-test_multinode-401792.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-401792 ssh -n multinode-401792 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-401792 cp multinode-401792:/home/docker/cp-test.txt multinode-401792-m02:/home/docker/cp-test_multinode-401792_multinode-401792-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-401792 ssh -n multinode-401792 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-401792 ssh -n multinode-401792-m02 "sudo cat /home/docker/cp-test_multinode-401792_multinode-401792-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-401792 cp multinode-401792:/home/docker/cp-test.txt multinode-401792-m03:/home/docker/cp-test_multinode-401792_multinode-401792-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-401792 ssh -n multinode-401792 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-401792 ssh -n multinode-401792-m03 "sudo cat /home/docker/cp-test_multinode-401792_multinode-401792-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-401792 cp testdata/cp-test.txt multinode-401792-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-401792 ssh -n multinode-401792-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-401792 cp multinode-401792-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1752522126/001/cp-test_multinode-401792-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-401792 ssh -n multinode-401792-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-401792 cp multinode-401792-m02:/home/docker/cp-test.txt multinode-401792:/home/docker/cp-test_multinode-401792-m02_multinode-401792.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-401792 ssh -n multinode-401792-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-401792 ssh -n multinode-401792 "sudo cat /home/docker/cp-test_multinode-401792-m02_multinode-401792.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-401792 cp multinode-401792-m02:/home/docker/cp-test.txt multinode-401792-m03:/home/docker/cp-test_multinode-401792-m02_multinode-401792-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-401792 ssh -n multinode-401792-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-401792 ssh -n multinode-401792-m03 "sudo cat /home/docker/cp-test_multinode-401792-m02_multinode-401792-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-401792 cp testdata/cp-test.txt multinode-401792-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-401792 ssh -n multinode-401792-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-401792 cp multinode-401792-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1752522126/001/cp-test_multinode-401792-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-401792 ssh -n multinode-401792-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-401792 cp multinode-401792-m03:/home/docker/cp-test.txt multinode-401792:/home/docker/cp-test_multinode-401792-m03_multinode-401792.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-401792 ssh -n multinode-401792-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-401792 ssh -n multinode-401792 "sudo cat /home/docker/cp-test_multinode-401792-m03_multinode-401792.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-401792 cp multinode-401792-m03:/home/docker/cp-test.txt multinode-401792-m02:/home/docker/cp-test_multinode-401792-m03_multinode-401792-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-401792 ssh -n multinode-401792-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-401792 ssh -n multinode-401792-m02 "sudo cat /home/docker/cp-test_multinode-401792-m03_multinode-401792-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.28s)
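Each cp/ssh pair above is one leg of a copy matrix: push a file with `minikube cp`, then cat it back on the target node and compare. A minimal Go sketch of a single leg, assuming it runs from a checkout with the `out/minikube-linux-amd64` binary and a testdata/cp-test.txt fixture, reusing the profile and paths from this run.

// One leg of the copy matrix: cp in, ssh + cat out, compare bytes.
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	mk := "out/minikube-linux-amd64" // assumption: run from the repo root
	local := "testdata/cp-test.txt"
	remote := "multinode-401792:/home/docker/cp-test.txt"

	if err := exec.Command(mk, "-p", "multinode-401792", "cp", local, remote).Run(); err != nil {
		fmt.Println("cp failed:", err)
		return
	}
	want, err := os.ReadFile(local)
	if err != nil {
		fmt.Println("reading local fixture failed:", err)
		return
	}
	got, err := exec.Command(mk, "-p", "multinode-401792", "ssh", "-n",
		"multinode-401792", "sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		fmt.Println("contents differ after round trip")
		return
	}
	fmt.Println("cp round trip verified")
}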

TestMultiNode/serial/StopNode (2.15s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-401792 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-401792 node stop m03: (1.196644774s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-401792 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-401792 status: exit status 7 (472.208089ms)

-- stdout --
	multinode-401792
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-401792-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-401792-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-401792 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-401792 status --alsologtostderr: exit status 7 (475.779717ms)

-- stdout --
	multinode-401792
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-401792-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-401792-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1005 20:24:09.541691  439991 out.go:296] Setting OutFile to fd 1 ...
	I1005 20:24:09.541965  439991 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 20:24:09.541975  439991 out.go:309] Setting ErrFile to fd 2...
	I1005 20:24:09.541980  439991 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 20:24:09.542203  439991 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-334135/.minikube/bin
	I1005 20:24:09.542370  439991 out.go:303] Setting JSON to false
	I1005 20:24:09.542417  439991 mustload.go:65] Loading cluster: multinode-401792
	I1005 20:24:09.542538  439991 notify.go:220] Checking for updates...
	I1005 20:24:09.542833  439991 config.go:182] Loaded profile config "multinode-401792": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1005 20:24:09.542850  439991 status.go:255] checking status of multinode-401792 ...
	I1005 20:24:09.543297  439991 cli_runner.go:164] Run: docker container inspect multinode-401792 --format={{.State.Status}}
	I1005 20:24:09.563208  439991 status.go:330] multinode-401792 host status = "Running" (err=<nil>)
	I1005 20:24:09.563263  439991 host.go:66] Checking if "multinode-401792" exists ...
	I1005 20:24:09.563546  439991 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-401792
	I1005 20:24:09.581054  439991 host.go:66] Checking if "multinode-401792" exists ...
	I1005 20:24:09.581384  439991 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1005 20:24:09.581454  439991 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-401792
	I1005 20:24:09.598910  439991 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/multinode-401792/id_rsa Username:docker}
	I1005 20:24:09.692346  439991 ssh_runner.go:195] Run: systemctl --version
	I1005 20:24:09.696603  439991 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1005 20:24:09.707712  439991 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 20:24:09.766985  439991 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:56 SystemTime:2023-10-05 20:24:09.75797215 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1005 20:24:09.767831  439991 kubeconfig.go:92] found "multinode-401792" server: "https://192.168.58.2:8443"
	I1005 20:24:09.767859  439991 api_server.go:166] Checking apiserver status ...
	I1005 20:24:09.767895  439991 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1005 20:24:09.778399  439991 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1437/cgroup
	I1005 20:24:09.787611  439991 api_server.go:182] apiserver freezer: "10:freezer:/docker/21605b8de5b4123278fa2f451fe30f7e931e24a24c7c9741ae865e2f0aa92a17/crio/crio-897d8bc73ce62f20b2445fc5df49aa23adf8078198bcc7e9ae528cc6bbbf9c3d"
	I1005 20:24:09.787665  439991 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/21605b8de5b4123278fa2f451fe30f7e931e24a24c7c9741ae865e2f0aa92a17/crio/crio-897d8bc73ce62f20b2445fc5df49aa23adf8078198bcc7e9ae528cc6bbbf9c3d/freezer.state
	I1005 20:24:09.795905  439991 api_server.go:204] freezer state: "THAWED"
	I1005 20:24:09.795936  439991 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1005 20:24:09.802600  439991 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1005 20:24:09.802630  439991 status.go:421] multinode-401792 apiserver status = Running (err=<nil>)
	I1005 20:24:09.802640  439991 status.go:257] multinode-401792 status: &{Name:multinode-401792 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1005 20:24:09.802661  439991 status.go:255] checking status of multinode-401792-m02 ...
	I1005 20:24:09.802982  439991 cli_runner.go:164] Run: docker container inspect multinode-401792-m02 --format={{.State.Status}}
	I1005 20:24:09.820291  439991 status.go:330] multinode-401792-m02 host status = "Running" (err=<nil>)
	I1005 20:24:09.820334  439991 host.go:66] Checking if "multinode-401792-m02" exists ...
	I1005 20:24:09.820600  439991 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-401792-m02
	I1005 20:24:09.837676  439991 host.go:66] Checking if "multinode-401792-m02" exists ...
	I1005 20:24:09.837970  439991 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1005 20:24:09.838013  439991 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-401792-m02
	I1005 20:24:09.855945  439991 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33154 SSHKeyPath:/home/jenkins/minikube-integration/17363-334135/.minikube/machines/multinode-401792-m02/id_rsa Username:docker}
	I1005 20:24:09.948178  439991 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1005 20:24:09.958801  439991 status.go:257] multinode-401792-m02 status: &{Name:multinode-401792-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1005 20:24:09.958850  439991 status.go:255] checking status of multinode-401792-m03 ...
	I1005 20:24:09.959149  439991 cli_runner.go:164] Run: docker container inspect multinode-401792-m03 --format={{.State.Status}}
	I1005 20:24:09.976963  439991 status.go:330] multinode-401792-m03 host status = "Stopped" (err=<nil>)
	I1005 20:24:09.976986  439991 status.go:343] host is not running, skipping remaining checks
	I1005 20:24:09.976993  439991 status.go:257] multinode-401792-m03 status: &{Name:multinode-401792-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.15s)
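
The stderr log above shows the status flow: inspect the container, check /var usage over SSH, locate the apiserver's freezer cgroup, then probe /healthz. Below is a minimal Go sketch of just the final healthz probe, assuming the endpoint from the log (https://192.168.58.2:8443); it is not minikube's actual source, and the real client loads the cluster CA from the kubeconfig instead of skipping TLS verification.

    // A minimal sketch, assuming the endpoint from the log above.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Test-only shortcut; production code verifies against the cluster CA.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.168.58.2:8443/healthz")
        if err != nil {
            fmt.Println("apiserver status = Stopped:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        // The log above expects "returned 200:" followed by "ok".
        fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    }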

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (11.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-401792 node start m03 --alsologtostderr
E1005 20:24:19.347048  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731/client.crt: no such file or directory
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-401792 node start m03 --alsologtostderr: (11.064600551s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-401792 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (11.77s)
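
Every "(dbg) Run:" line in this report is the test harness shelling out to the minikube binary and recording the duration. A hedged sketch of that pattern using only os/exec; the helper name `run` is illustrative, not minikube's actual helper.

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // run executes a command, echoes its combined output, and returns the exit error.
    func run(name string, args ...string) error {
        start := time.Now()
        out, err := exec.Command(name, args...).CombinedOutput()
        fmt.Printf("(dbg) Run: %s %v (%s)\n%s", name, args, time.Since(start), out)
        return err
    }

    func main() {
        _ = run("out/minikube-linux-amd64", "-p", "multinode-401792", "node", "start", "m03", "--alsologtostderr")
    }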

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (115.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-401792
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-401792
multinode_test.go:290: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-401792: (24.940650696s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-401792 --wait=true -v=8 --alsologtostderr
E1005 20:25:20.648062  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/addons-029116/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-401792 --wait=true -v=8 --alsologtostderr: (1m30.740483069s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-401792
--- PASS: TestMultiNode/serial/RestartKeepsNodes (115.77s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (4.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-401792 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-401792 node delete m03: (4.149961009s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-401792 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.74s)
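
The final readiness check uses a kubectl go-template that walks each node's conditions and prints the Ready status. Here is a standalone sketch of the same template evaluated with Go's text/template; kubectl addresses the raw JSON (lowercase .items), while Go structs need exported fields, so the names are adapted and the types are simplified stand-ins for the Kubernetes API objects.

    package main

    import (
        "os"
        "text/template"
    )

    type condition struct{ Type, Status string }

    type node struct {
        Status struct{ Conditions []condition }
    }

    func main() {
        tmpl := template.Must(template.New("ready").Parse(
            `{{range .Items}}{{range .Status.Conditions}}{{if eq .Type "Ready"}} {{.Status}}{{"\n"}}{{end}}{{end}}{{end}}`))
        var n node
        n.Status.Conditions = []condition{{Type: "Ready", Status: "True"}}
        list := struct{ Items []node }{Items: []node{n}}
        _ = tmpl.Execute(os.Stdout, list) // prints " True" per ready node
    }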

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (23.86s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-401792 stop
E1005 20:26:42.403019  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/functional-368978/client.crt: no such file or directory
E1005 20:26:43.692533  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/addons-029116/client.crt: no such file or directory
multinode_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p multinode-401792 stop: (23.714470392s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-401792 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-401792 status: exit status 7 (71.952432ms)

                                                
                                                
-- stdout --
	multinode-401792
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-401792-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-amd64 -p multinode-401792 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-401792 status --alsologtostderr: exit status 7 (70.89084ms)

                                                
                                                
-- stdout --
	multinode-401792
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-401792-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1005 20:26:46.070589  450257 out.go:296] Setting OutFile to fd 1 ...
	I1005 20:26:46.070691  450257 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 20:26:46.070700  450257 out.go:309] Setting ErrFile to fd 2...
	I1005 20:26:46.070705  450257 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 20:26:46.070896  450257 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-334135/.minikube/bin
	I1005 20:26:46.071085  450257 out.go:303] Setting JSON to false
	I1005 20:26:46.071122  450257 mustload.go:65] Loading cluster: multinode-401792
	I1005 20:26:46.071230  450257 notify.go:220] Checking for updates...
	I1005 20:26:46.071465  450257 config.go:182] Loaded profile config "multinode-401792": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1005 20:26:46.071484  450257 status.go:255] checking status of multinode-401792 ...
	I1005 20:26:46.071835  450257 cli_runner.go:164] Run: docker container inspect multinode-401792 --format={{.State.Status}}
	I1005 20:26:46.088428  450257 status.go:330] multinode-401792 host status = "Stopped" (err=<nil>)
	I1005 20:26:46.088453  450257 status.go:343] host is not running, skipping remaining checks
	I1005 20:26:46.088460  450257 status.go:257] multinode-401792 status: &{Name:multinode-401792 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1005 20:26:46.088484  450257 status.go:255] checking status of multinode-401792-m02 ...
	I1005 20:26:46.089091  450257 cli_runner.go:164] Run: docker container inspect multinode-401792-m02 --format={{.State.Status}}
	I1005 20:26:46.104310  450257 status.go:330] multinode-401792-m02 host status = "Stopped" (err=<nil>)
	I1005 20:26:46.104328  450257 status.go:343] host is not running, skipping remaining checks
	I1005 20:26:46.104333  450257 status.go:257] multinode-401792-m02 status: &{Name:multinode-401792-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.86s)
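
Note the exit status 7 on both status calls: `minikube status` reports a stopped cluster through its exit code rather than through an error. A sketch of reading that code from Go, under the assumption (consistent with the log above) that 7 simply means the host is stopped.

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-401792", "status")
        out, err := cmd.Output() // stdout is still returned on a non-zero exit
        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) {
            // Exit status 7 is the expected "stopped" outcome, not a test failure.
            fmt.Printf("status exited %d:\n%s", exitErr.ExitCode(), out)
            return
        }
        fmt.Printf("cluster running:\n%s", out)
    }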

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (78.72s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-401792 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-401792 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m18.165573032s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-401792 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (78.72s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (23.32s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-401792
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-401792-m02 --driver=docker  --container-runtime=crio
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-401792-m02 --driver=docker  --container-runtime=crio: exit status 14 (56.294241ms)

                                                
                                                
-- stdout --
	* [multinode-401792-m02] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17363
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17363-334135/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-334135/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-401792-m02' is duplicated with machine name 'multinode-401792-m02' in profile 'multinode-401792'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-401792-m03 --driver=docker  --container-runtime=crio
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-401792-m03 --driver=docker  --container-runtime=crio: (21.176031122s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-401792
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-401792: exit status 80 (255.473077ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-401792
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-401792-m03 already exists in multinode-401792-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-401792-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-401792-m03: (1.796492003s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (23.32s)
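
The exit-14 failure above is minikube's profile-name uniqueness rule: a new profile may not reuse a machine name belonging to an existing profile. A toy sketch of that check; the map of existing machines is illustrative, not read from real profile files.

    package main

    import "fmt"

    // validateProfileName rejects names that collide with a machine in an existing profile.
    func validateProfileName(name string, existingMachines map[string]string) error {
        if profile, ok := existingMachines[name]; ok {
            return fmt.Errorf("profile name %q is duplicated with machine name %q in profile %q", name, name, profile)
        }
        return nil
    }

    func main() {
        machines := map[string]string{
            "multinode-401792":     "multinode-401792",
            "multinode-401792-m02": "multinode-401792",
        }
        fmt.Println(validateProfileName("multinode-401792-m02", machines)) // rejected, as in the log
        fmt.Println(validateProfileName("multinode-401792-m03", machines)) // nil: name is free
    }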

                                                
                                    
x
+
TestPreload (148.73s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-203019 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E1005 20:28:51.661768  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-203019 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m9.246214028s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-203019 image pull gcr.io/k8s-minikube/busybox
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-203019
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-203019: (5.600327138s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-203019 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1005 20:30:20.648167  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/addons-029116/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-203019 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m10.6770933s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-203019 image list
helpers_test.go:175: Cleaning up "test-preload-203019" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-203019
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-203019: (2.252563514s)
--- PASS: TestPreload (148.73s)
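
TestPreload's core assertion is that the busybox image pulled before the stop still appears in `image list` after the restart. A sketch of that final verification, assuming the binary path and profile name from the log above.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("out/minikube-linux-amd64", "-p", "test-preload-203019", "image", "list").Output()
        if err != nil {
            fmt.Println("image list failed:", err)
            return
        }
        // The image was pulled before the stop/start cycle; it must survive it.
        if strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
            fmt.Println("preloaded image survived the restart")
        } else {
            fmt.Println("busybox missing after restart")
        }
    }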

                                                
                                    
x
+
TestScheduledStopUnix (99.89s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-247817 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-247817 --memory=2048 --driver=docker  --container-runtime=crio: (23.731879503s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-247817 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-247817 -n scheduled-stop-247817
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-247817 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-247817 --cancel-scheduled
E1005 20:31:42.403625  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/functional-368978/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-247817 -n scheduled-stop-247817
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-247817
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-247817 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-247817
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-247817: exit status 7 (56.299687ms)

                                                
                                                
-- stdout --
	scheduled-stop-247817
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-247817 -n scheduled-stop-247817
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-247817 -n scheduled-stop-247817: exit status 7 (53.867455ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-247817" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-247817
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-247817: (4.983001323s)
--- PASS: TestScheduledStopUnix (99.89s)
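
The sequence above arms a delayed stop (--schedule 5m), cancels it (--cancel-scheduled), then re-arms it with a short deadline that fires. A toy sketch of that arm/cancel control logic with a timer; minikube actually daemonizes a background process, so this models only the behavior, not the implementation.

    package main

    import (
        "fmt"
        "time"
    )

    type scheduler struct{ timer *time.Timer }

    // schedule replaces any pending stop with a new one after d.
    func (s *scheduler) schedule(d time.Duration, stop func()) {
        s.cancel()
        s.timer = time.AfterFunc(d, stop)
    }

    func (s *scheduler) cancel() {
        if s.timer != nil {
            s.timer.Stop()
        }
    }

    func main() {
        var s scheduler
        s.schedule(5*time.Minute, func() { fmt.Println("stopping") }) // --schedule 5m
        s.cancel()                                                    // --cancel-scheduled
        s.schedule(100*time.Millisecond, func() { fmt.Println("stopping") })
        time.Sleep(200 * time.Millisecond) // let the short schedule fire
    }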

                                                
                                    
x
+
TestInsufficientStorage (12.96s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-251003 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-251003 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.699058603s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"8afde01a-5943-4099-afa1-f6d4daf0ebbd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-251003] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"969a66e0-c80f-46f3-94f7-474cc138c0be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17363"}}
	{"specversion":"1.0","id":"2fbb5a3a-f5ad-4170-ab17-d0a6c4f44485","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ad6d21c9-cef1-40cb-957f-6313975b22fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17363-334135/kubeconfig"}}
	{"specversion":"1.0","id":"73db65da-b694-474c-b1dc-785b491347b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-334135/.minikube"}}
	{"specversion":"1.0","id":"e29e1c0f-0d93-4d1d-973c-a91283456336","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"614f4a36-1d7e-412d-86b7-d4096f5e85ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5356c942-f484-4abf-8783-53e59099395d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"5ec1e219-679e-4fdf-bd9d-5bc2d52c67e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"233488a0-4486-4ebe-aa5f-557ff938fa9d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"fbd3d71e-0814-47b5-ab61-1029b2bb1351","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"7c967d65-8dbb-4a3c-944b-f3c5f7ed4b4f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-251003 in cluster insufficient-storage-251003","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"8ed4011b-f0a7-433f-ad16-2bd8844b1240","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"08fec2e1-69ac-455e-9e6d-a1f8b6d1ebd4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"baeba6a3-fc75-45ec-98ff-17700592dd51","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-251003 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-251003 --output=json --layout=cluster: exit status 7 (249.668286ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-251003","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-251003","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1005 20:32:53.176438  471488 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-251003" does not appear in /home/jenkins/minikube-integration/17363-334135/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-251003 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-251003 --output=json --layout=cluster: exit status 7 (238.801808ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-251003","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-251003","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1005 20:32:53.415726  471584 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-251003" does not appear in /home/jenkins/minikube-integration/17363-334135/kubeconfig
	E1005 20:32:53.424856  471584 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/insufficient-storage-251003/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-251003" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-251003
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-251003: (1.770715288s)
--- PASS: TestInsufficientStorage (12.96s)
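
With --output=json, each stdout line above is a CloudEvents envelope, and the failure surfaces as the single io.k8s.sigs.minikube.error event carrying exitcode 26. A sketch that scans such a stream for the error event; the two-line sample stream is abridged from the output above.

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "strings"
    )

    // event models just the envelope fields this check needs.
    type event struct {
        Type string            `json:"type"`
        Data map[string]string `json:"data"`
    }

    func main() {
        stream := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.info","data":{"message":"MINIKUBE_LOCATION=17363"}}
{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE"}}`
        sc := bufio.NewScanner(strings.NewReader(stream))
        for sc.Scan() {
            var e event
            if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
                continue // skip non-JSON lines
            }
            if e.Type == "io.k8s.sigs.minikube.error" {
                fmt.Printf("error %s, exitcode %s\n", e.Data["name"], e.Data["exitcode"])
            }
        }
    }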

                                                
                                    
x
+
TestKubernetesUpgrade (355.18s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-204061 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1005 20:35:14.708123  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731/client.crt: no such file or directory
E1005 20:35:20.648402  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/addons-029116/client.crt: no such file or directory
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-204061 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (54.981742528s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-204061
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-204061: (4.093606696s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-204061 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-204061 status --format={{.Host}}: exit status 7 (66.337797ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-204061 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-204061 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m30.885333423s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-204061 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-204061 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-204061 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (65.93919ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-204061] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17363
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17363-334135/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-334135/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-204061
	    minikube start -p kubernetes-upgrade-204061 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2040612 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.2, by running:
	    
	    minikube start -p kubernetes-upgrade-204061 --kubernetes-version=v1.28.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-204061 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1005 20:40:20.648250  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/addons-029116/client.crt: no such file or directory
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-204061 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (22.910240791s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-204061" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-204061
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-204061: (2.112142915s)
--- PASS: TestKubernetesUpgrade (355.18s)
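
The K8S_DOWNGRADE_UNSUPPORTED exit above comes from comparing the requested Kubernetes version against the cluster's current one and refusing to go backwards. A minimal sketch of that guard with a hand-rolled comparison; minikube itself uses a semver library, and patch-level differences are ignored here for brevity.

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // parse extracts major.minor.patch from a "vX.Y.Z" string.
    func parse(v string) (parts [3]int) {
        for i, p := range strings.SplitN(strings.TrimPrefix(v, "v"), ".", 3) {
            parts[i], _ = strconv.Atoi(p)
        }
        return
    }

    func checkVersion(existing, requested string) error {
        e, r := parse(existing), parse(requested)
        if r[0] < e[0] || (r[0] == e[0] && r[1] < e[1]) {
            return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s", existing, requested)
        }
        return nil
    }

    func main() {
        fmt.Println(checkVersion("v1.28.2", "v1.16.0")) // refused, as in the log
        fmt.Println(checkVersion("v1.16.0", "v1.28.2")) // nil: upgrading is allowed
    }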

                                                
                                    
x
+
TestMissingContainerUpgrade (152.9s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.9.0.358944909.exe start -p missing-upgrade-739534 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:322: (dbg) Done: /tmp/minikube-v1.9.0.358944909.exe start -p missing-upgrade-739534 --memory=2200 --driver=docker  --container-runtime=crio: (1m24.710610692s)
version_upgrade_test.go:331: (dbg) Run:  docker stop missing-upgrade-739534
version_upgrade_test.go:336: (dbg) Run:  docker rm missing-upgrade-739534
version_upgrade_test.go:342: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-739534 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:342: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-739534 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m5.01469601s)
helpers_test.go:175: Cleaning up "missing-upgrade-739534" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-739534
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-739534: (2.161686185s)
--- PASS: TestMissingContainerUpgrade (152.90s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-732676 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-732676 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (63.701477ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-732676] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17363
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17363-334135/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-334135/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)
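
The MK_USAGE failure above is a mutually-exclusive flag check: --kubernetes-version makes no sense together with --no-kubernetes. A sketch of that validation with the standard flag package; minikube's real CLI is built on cobra, so the wiring differs.

    package main

    import (
        "flag"
        "fmt"
        "os"
    )

    func main() {
        noKubernetes := flag.Bool("no-kubernetes", false, "start without Kubernetes")
        kubernetesVersion := flag.String("kubernetes-version", "", "Kubernetes version to run")
        flag.Parse()

        // The two flags contradict each other; reject the combination up front.
        if *noKubernetes && *kubernetesVersion != "" {
            fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes")
            os.Exit(14) // usage-error exit code seen in the log
        }
        fmt.Println("flags ok")
    }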

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.52s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.52s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (37.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-732676 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-732676 --driver=docker  --container-runtime=crio: (36.970184446s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-732676 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (37.28s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (9.75s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-732676 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-732676 --no-kubernetes --driver=docker  --container-runtime=crio: (7.395785975s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-732676 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-732676 status -o json: exit status 2 (383.64749ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-732676","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-732676
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-732676: (1.970504297s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (9.75s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (5.58s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-732676 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-732676 --no-kubernetes --driver=docker  --container-runtime=crio: (5.583393728s)
--- PASS: TestNoKubernetes/serial/Start (5.58s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-732676 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-732676 "sudo systemctl is-active --quiet service kubelet": exit status 1 (259.336815ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)
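
`systemctl is-active --quiet` communicates state purely through its exit code: 0 for active, non-zero (3 for inactive) otherwise, which is why "Process exited with status 3" is the passing outcome here. A local sketch of the same check; the test runs it over SSH inside the node.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
        if err != nil {
            // Exit status 3 means the unit is inactive, i.e. Kubernetes is not running.
            fmt.Println("kubelet not active:", err)
            return
        }
        fmt.Println("kubelet active")
    }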

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.35s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.75s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-732676
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-732676: (1.748141658s)
--- PASS: TestNoKubernetes/serial/Stop (1.75s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (8.95s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-732676 --driver=docker  --container-runtime=crio
E1005 20:33:51.661837  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-732676 --driver=docker  --container-runtime=crio: (8.952548993s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.95s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-732676 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-732676 "sudo systemctl is-active --quiet service kubelet": exit status 1 (321.011846ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.65s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-969365
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.65s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (2.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-055860 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-055860 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (163.37008ms)

                                                
                                                
-- stdout --
	* [false-055860] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17363
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17363-334135/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-334135/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1005 20:34:21.185302  498152 out.go:296] Setting OutFile to fd 1 ...
	I1005 20:34:21.185545  498152 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 20:34:21.185554  498152 out.go:309] Setting ErrFile to fd 2...
	I1005 20:34:21.185558  498152 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 20:34:21.185743  498152 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-334135/.minikube/bin
	I1005 20:34:21.186265  498152 out.go:303] Setting JSON to false
	I1005 20:34:21.187306  498152 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8190,"bootTime":1696529871,"procs":373,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1005 20:34:21.187370  498152 start.go:138] virtualization: kvm guest
	I1005 20:34:21.189106  498152 out.go:177] * [false-055860] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1005 20:34:21.190649  498152 out.go:177]   - MINIKUBE_LOCATION=17363
	I1005 20:34:21.191830  498152 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1005 20:34:21.190696  498152 notify.go:220] Checking for updates...
	I1005 20:34:21.194102  498152 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17363-334135/kubeconfig
	I1005 20:34:21.195603  498152 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-334135/.minikube
	I1005 20:34:21.196821  498152 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1005 20:34:21.198045  498152 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1005 20:34:21.199815  498152 config.go:182] Loaded profile config "force-systemd-env-683528": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1005 20:34:21.199981  498152 config.go:182] Loaded profile config "missing-upgrade-739534": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1005 20:34:21.200108  498152 config.go:182] Loaded profile config "stopped-upgrade-969365": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1005 20:34:21.200209  498152 driver.go:378] Setting default libvirt URI to qemu:///system
	I1005 20:34:21.225574  498152 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1005 20:34:21.225666  498152 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 20:34:21.298292  498152 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:47 SystemTime:2023-10-05 20:34:21.284738292 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1005 20:34:21.298390  498152 docker.go:294] overlay module found
	I1005 20:34:21.299999  498152 out.go:177] * Using the docker driver based on user configuration
	I1005 20:34:21.301370  498152 start.go:298] selected driver: docker
	I1005 20:34:21.301380  498152 start.go:902] validating driver "docker" against <nil>
	I1005 20:34:21.301396  498152 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1005 20:34:21.303202  498152 out.go:177] 
	W1005 20:34:21.304441  498152 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1005 20:34:21.305657  498152 out.go:177] 

                                                
                                                
** /stderr **
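
The exit-14 above enforces that the crio container runtime needs a CNI, so --cni=false is rejected before any cluster is created. A toy sketch of that validation; the parameter values mirror the flags from the failed command.

    package main

    import (
        "fmt"
        "os"
    )

    // validateCNI rejects runtime/CNI combinations that cannot work.
    func validateCNI(containerRuntime, cni string) error {
        if containerRuntime == "crio" && cni == "false" {
            return fmt.Errorf("the %q container runtime requires CNI", containerRuntime)
        }
        return nil
    }

    func main() {
        if err := validateCNI("crio", "false"); err != nil {
            fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE:", err)
            os.Exit(14)
        }
    }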
net_test.go:88: 
----------------------- debugLogs start: false-055860 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-055860

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-055860

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-055860

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-055860

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-055860

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-055860

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-055860

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-055860

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-055860

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-055860

>>> host: /etc/nsswitch.conf:
* Profile "false-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055860"

>>> host: /etc/hosts:
* Profile "false-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055860"

>>> host: /etc/resolv.conf:
* Profile "false-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055860"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-055860

>>> host: crictl pods:
* Profile "false-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055860"

>>> host: crictl containers:
* Profile "false-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055860"

>>> k8s: describe netcat deployment:
error: context "false-055860" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-055860" does not exist

>>> k8s: netcat logs:
error: context "false-055860" does not exist

>>> k8s: describe coredns deployment:
error: context "false-055860" does not exist

>>> k8s: describe coredns pods:
error: context "false-055860" does not exist

>>> k8s: coredns logs:
error: context "false-055860" does not exist

>>> k8s: describe api server pod(s):
error: context "false-055860" does not exist

>>> k8s: api server logs:
error: context "false-055860" does not exist

>>> host: /etc/cni:
* Profile "false-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055860"

>>> host: ip a s:
* Profile "false-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055860"

>>> host: ip r s:
* Profile "false-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055860"

>>> host: iptables-save:
* Profile "false-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055860"

>>> host: iptables table nat:
* Profile "false-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055860"

>>> k8s: describe kube-proxy daemon set:
error: context "false-055860" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-055860" does not exist

>>> k8s: kube-proxy logs:
error: context "false-055860" does not exist

>>> host: kubelet daemon status:
* Profile "false-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055860"

>>> host: kubelet daemon config:
* Profile "false-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055860"

>>> k8s: kubelet logs:
* Profile "false-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055860"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055860"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055860"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17363-334135/.minikube/ca.crt
    server: https://127.0.0.1:33228
  name: missing-upgrade-739534
contexts:
- context:
    cluster: missing-upgrade-739534
    user: missing-upgrade-739534
  name: missing-upgrade-739534
current-context: ""
kind: Config
preferences: {}
users:
- name: missing-upgrade-739534
  user:
    client-certificate: /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/missing-upgrade-739534/client.crt
    client-key: /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/missing-upgrade-739534/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-055860

>>> host: docker daemon status:
* Profile "false-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055860"

>>> host: docker daemon config:
* Profile "false-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055860"

>>> host: /etc/docker/daemon.json:
* Profile "false-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055860"

>>> host: docker system info:
* Profile "false-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055860"

>>> host: cri-docker daemon status:
* Profile "false-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055860"

>>> host: cri-docker daemon config:
* Profile "false-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055860"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055860"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055860"

>>> host: cri-dockerd version:
* Profile "false-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055860"

>>> host: containerd daemon status:
* Profile "false-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055860"

>>> host: containerd daemon config:
* Profile "false-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055860"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055860"

>>> host: /etc/containerd/config.toml:
* Profile "false-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055860"

>>> host: containerd config dump:
* Profile "false-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055860"

>>> host: crio daemon status:
* Profile "false-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055860"

>>> host: crio daemon config:
* Profile "false-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055860"

>>> host: /etc/crio:
* Profile "false-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055860"

>>> host: crio config:
* Profile "false-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055860"

----------------------- debugLogs end: false-055860 [took: 2.59196191s] --------------------------------
helpers_test.go:175: Cleaning up "false-055860" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-055860
--- PASS: TestNetworkPlugins/group/false (2.90s)
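This PASS is expected: the "false" variant of TestNetworkPlugins asks for the crio runtime with CNI disabled, which minikube rejects before creating anything (the MK_USAGE error above), so the profile never exists and every debug probe reports a missing context or profile. A minimal reproduction sketch, assuming the same minikube build; the profile name is simply the one this run generated:

	# Expected to fail fast with: Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	out/minikube-linux-amd64 start -p false-055860 --driver=docker --container-runtime=crio --cni=false
	# Because the start is refused, no kubeconfig context is ever written:
	kubectl config get-contexts false-055860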

TestPause/serial/Start (76.44s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-581062 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-581062 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m16.444175105s)
--- PASS: TestPause/serial/Start (76.44s)

TestPause/serial/SecondStartNoReconfiguration (40.86s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-581062 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-581062 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (40.839895323s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (40.86s)

TestPause/serial/Pause (0.75s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-581062 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.75s)

TestPause/serial/VerifyStatus (0.3s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-581062 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-581062 --output=json --layout=cluster: exit status 2 (304.246124ms)

-- stdout --
	{"Name":"pause-581062","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-581062","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.30s)
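The cluster-layout status shown above encodes component state with HTTP-style codes (200 OK, 405 Stopped, 418 Paused), and the command deliberately exits 2 while the cluster is paused, which the test tolerates. A sketch for pulling out the per-component codes, assuming jq is available on the host:

	out/minikube-linux-amd64 status -p pause-581062 --output=json --layout=cluster | jq '.Nodes[].Components'
	# exit status 2 here signals "paused", not a test failure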

TestPause/serial/Unpause (0.67s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-581062 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.67s)

TestPause/serial/PauseAgain (0.84s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-581062 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.84s)

TestPause/serial/DeletePaused (2.62s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-581062 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-581062 --alsologtostderr -v=5: (2.617653141s)
--- PASS: TestPause/serial/DeletePaused (2.62s)

TestPause/serial/VerifyDeletedResources (0.6s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-581062
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-581062: exit status 1 (18.997579ms)

-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-581062: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.60s)
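VerifyDeletedResources checks that "delete -p" removed the profile's container, volume, and network. A hand-run equivalent sketch, filtering by the profile name; docker volume inspect exiting 1 with "no such volume" is the expected outcome:

	docker ps -a --filter name=pause-581062 --format '{{.Names}}'        # expect no output
	docker volume inspect pause-581062                                   # expect: no such volume, exit status 1
	docker network ls --filter name=pause-581062 --format '{{.Name}}'    # expect no output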

TestStartStop/group/old-k8s-version/serial/FirstStart (120.74s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-392205 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-392205 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (2m0.738614427s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (120.74s)

TestStartStop/group/no-preload/serial/FirstStart (60.85s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-003356 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
E1005 20:36:42.403582  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/functional-368978/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-003356 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (1m0.850547671s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (60.85s)

TestStartStop/group/no-preload/serial/DeployApp (8.33s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-003356 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [415fed54-a907-4c21-8295-f78ddd265dc0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [415fed54-a907-4c21-8295-f78ddd265dc0] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.015024372s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-003356 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.33s)
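The DeployApp steps above reduce to a create/wait/exec sequence against the profile's kubeconfig context. The test polls with its own helpers; the kubectl wait line below is an approximate stand-in for that polling:

	kubectl --context no-preload-003356 create -f testdata/busybox.yaml
	kubectl --context no-preload-003356 wait --for=condition=ready pod -l integration-test=busybox --timeout=8m0s
	kubectl --context no-preload-003356 exec busybox -- /bin/sh -c "ulimit -n"    # open-file limit inside the container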

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.84s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-003356 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-003356 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.84s)

TestStartStop/group/no-preload/serial/Stop (11.87s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-003356 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-003356 --alsologtostderr -v=3: (11.87104467s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.87s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.16s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-003356 -n no-preload-003356
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-003356 -n no-preload-003356: exit status 7 (60.715856ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-003356 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.16s)
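EnableAddonAfterStop depends on two behaviors seen above: "minikube status" exits 7 once the host is stopped (the test accepts that code), and addons can still be toggled against a stopped profile. The same check by hand would look roughly like:

	out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-003356    # prints: Stopped
	echo $?                                                                    # 7 in this run, nonzero because the host is down
	out/minikube-linux-amd64 addons enable dashboard -p no-preload-003356 --images=MetricsScraper=registry.k8s.io/echoserver:1.4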

TestStartStop/group/no-preload/serial/SecondStart (341s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-003356 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-003356 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (5m40.678033019s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-003356 -n no-preload-003356
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (341.00s)

TestStartStop/group/old-k8s-version/serial/DeployApp (7.36s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-392205 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3bed47b0-a4a6-436f-981f-a339d69e0c3c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3bed47b0-a4a6-436f-981f-a339d69e0c3c] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 7.012598423s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-392205 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (7.36s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.7s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-392205 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-392205 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.70s)

TestStartStop/group/old-k8s-version/serial/Stop (11.84s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-392205 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-392205 --alsologtostderr -v=3: (11.840554253s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.84s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.15s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-392205 -n old-k8s-version-392205
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-392205 -n old-k8s-version-392205: exit status 7 (56.820213ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-392205 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.15s)

TestStartStop/group/old-k8s-version/serial/SecondStart (412.48s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-392205 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
E1005 20:38:51.661789  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-392205 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (6m52.165855713s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-392205 -n old-k8s-version-392205
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (412.48s)

TestStartStop/group/embed-certs/serial/FirstStart (70.37s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-510861 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-510861 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (1m10.368344745s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (70.37s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (40.68s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-520141 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-520141 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (40.684478364s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (40.68s)

TestStartStop/group/embed-certs/serial/DeployApp (9.42s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-510861 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1a384d20-f3a8-4be8-b110-dafdb6c61953] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1a384d20-f3a8-4be8-b110-dafdb6c61953] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.018474406s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-510861 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.42s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.99s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-510861 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-510861 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.99s)

TestStartStop/group/embed-certs/serial/Stop (11.94s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-510861 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-510861 --alsologtostderr -v=3: (11.934972012s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.94s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-510861 -n embed-certs-510861
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-510861 -n embed-certs-510861: exit status 7 (66.01863ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-510861 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/embed-certs/serial/SecondStart (346.86s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-510861 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-510861 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (5m46.450393278s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-510861 -n embed-certs-510861
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (346.86s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-520141 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b73fff7f-23ab-461e-aa16-6810db879c6a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b73fff7f-23ab-461e-aa16-6810db879c6a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.01664587s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-520141 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.37s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.99s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-520141 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-520141 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.99s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.98s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-520141 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-520141 --alsologtostderr -v=3: (11.975146984s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.98s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.16s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-520141 -n default-k8s-diff-port-520141
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-520141 -n default-k8s-diff-port-520141: exit status 7 (62.277263ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-520141 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.16s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (337.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-520141 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
E1005 20:41:42.403148  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/functional-368978/client.crt: no such file or directory
E1005 20:43:23.693108  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/addons-029116/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-520141 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (5m36.713326586s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-520141 -n default-k8s-diff-port-520141
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (337.22s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-nzx75" [41e09b6e-91f6-443b-999c-cc65913b7c31] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-nzx75" [41e09b6e-91f6-443b-999c-cc65913b7c31] Running
E1005 20:43:51.661760  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.017272063s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.02s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-nzx75" [41e09b6e-91f6-443b-999c-cc65913b7c31] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010250806s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-003356 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-003356 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.30s)
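VerifyKubernetesImages lists the node's images over SSH and flags anything minikube did not ship itself (here kindnetd and the busybox test image). A sketch for reading the same list, assuming jq is available; crictl's JSON output carries an images array with repoTags:

	out/minikube-linux-amd64 ssh -p no-preload-003356 "sudo crictl images -o json" | jq -r '.images[].repoTags[]'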

TestStartStop/group/no-preload/serial/Pause (2.71s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-003356 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-003356 -n no-preload-003356
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-003356 -n no-preload-003356: exit status 2 (296.587974ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-003356 -n no-preload-003356
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-003356 -n no-preload-003356: exit status 2 (298.155405ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-003356 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-003356 -n no-preload-003356
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-003356 -n no-preload-003356
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.71s)

TestStartStop/group/newest-cni/serial/FirstStart (37.73s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-733264 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-733264 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (37.732224494s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (37.73s)
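The newest-cni group starts with --network-plugin=cni and a custom kubeadm pod CIDR but never installs a CNI, which is why the later steps warn that pods cannot schedule. A sketch of the follow-up a usable cluster would need; the manifest path is purely illustrative:

	out/minikube-linux-amd64 start -p newest-cni-733264 --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --container-runtime=crio
	# pods stay Pending until some CNI is applied, e.g.:
	kubectl --context newest-cni-733264 apply -f <your-cni-manifest.yaml>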
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.94s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-733264 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.94s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (1.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-733264 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-733264 --alsologtostderr -v=3: (1.233229875s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.23s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-733264 -n newest-cni-733264
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-733264 -n newest-cni-733264: exit status 7 (76.997785ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-733264 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/newest-cni/serial/SecondStart (26.67s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-733264 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-733264 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (26.36676082s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-733264 -n newest-cni-733264
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (26.67s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.3s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-733264 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/newest-cni/serial/Pause (2.67s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-733264 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-733264 -n newest-cni-733264
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-733264 -n newest-cni-733264: exit status 2 (301.516525ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-733264 -n newest-cni-733264
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-733264 -n newest-cni-733264: exit status 2 (294.994473ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-733264 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-733264 -n newest-cni-733264
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-733264 -n newest-cni-733264
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.67s)
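
Note: in the pause sequence above, exit status 2 from "minikube status" is expected rather than fatal: a paused apiserver or a stopped kubelet makes the status command exit non-zero, which the test records as "may be ok" and then verifies again after unpausing. A minimal Go sketch of that check (not part of the suite), assuming a minikube binary on PATH and reusing the profile name from this log:

	// pausecheck.go: query one status field and tolerate the non-zero exit
	// that a paused component produces.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("minikube", "status",
			"--format={{.APIServer}}", "-p", "newest-cni-733264")
		out, err := cmd.Output()
		fmt.Printf("status output: %q\n", out)
		if exitErr, ok := err.(*exec.ExitError); ok {
			// Exit code 2 means a component is not running (e.g. Paused),
			// the expected state immediately after "minikube pause".
			fmt.Printf("exit code: %d (may be ok)\n", exitErr.ExitCode())
		}
	}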

                                                
                                    
TestNetworkPlugins/group/auto/Start (41.2s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-055860 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E1005 20:45:20.647462  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/addons-029116/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-055860 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (41.199003573s)
--- PASS: TestNetworkPlugins/group/auto/Start (41.20s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-5np7n" [78f5ddf5-9a6f-4188-99d0-12eaf4d686a1] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.015529402s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-5np7n" [78f5ddf5-9a6f-4188-99d0-12eaf4d686a1] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008824228s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-392205 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.35s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-392205 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.35s)
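
Note: each VerifyKubernetesImages step above lists images over SSH with "sudo crictl images -o json" and reports anything outside the set minikube itself loads. A minimal Go sketch of that parse, assuming crictl's JSON shape ({"images":[{"repoTags":[...]}]}) and a simplified two-registry allow-list; the real test compares against its own expected-image list rather than registry prefixes:

	// imagescan.go: decode crictl's JSON and flag unexpected repo tags.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
		"strings"
	)

	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		out, err := exec.Command("sudo", "crictl", "images", "-o", "json").Output()
		if err != nil {
			panic(err)
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			panic(err)
		}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				// Anything outside these registries is reported, mirroring
				// the "Found non-minikube image" lines in this log.
				if !strings.HasPrefix(tag, "registry.k8s.io/") &&
					!strings.HasPrefix(tag, "gcr.io/k8s-minikube/") {
					fmt.Println("found non-minikube image:", tag)
				}
			}
		}
	}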

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.06s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-392205 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-392205 -n old-k8s-version-392205
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-392205 -n old-k8s-version-392205: exit status 2 (327.411013ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-392205 -n old-k8s-version-392205
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-392205 -n old-k8s-version-392205: exit status 2 (327.676184ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-392205 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-392205 -n old-k8s-version-392205
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-392205 -n old-k8s-version-392205
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.06s)

TestNetworkPlugins/group/auto/KubeletFlags (0.32s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-055860 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

TestNetworkPlugins/group/auto/NetCatPod (10.36s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-055860 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-lmd5g" [211625f3-328c-47b3-8ea4-1f88a656715e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-lmd5g" [211625f3-328c-47b3-8ea4-1f88a656715e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.009920181s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.36s)
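
Note: every NetCatPod step in this run follows the same pattern: force-replace the netcat deployment from testdata, then wait for a pod labelled app=netcat to reach Running (the Pending line above is the intermediate state). A minimal Go sketch of that deploy-and-poll loop, shelling out to kubectl like the suite does; the jsonpath query is an illustrative stand-in for the test helpers:

	// netcatwait.go: replace the deployment, then poll the pod phase.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func kubectl(args ...string) ([]byte, error) {
		all := append([]string{"--context", "auto-055860"}, args...)
		return exec.Command("kubectl", all...).CombinedOutput()
	}

	func main() {
		if out, err := kubectl("replace", "--force", "-f", "testdata/netcat-deployment.yaml"); err != nil {
			panic(fmt.Sprintf("%v: %s", err, out))
		}
		deadline := time.Now().Add(15 * time.Minute) // the test's wait budget
		for time.Now().Before(deadline) {
			out, _ := kubectl("get", "pods", "-l", "app=netcat",
				"-o", "jsonpath={.items[0].status.phase}")
			if string(out) == "Running" {
				fmt.Println("netcat pod is Running")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for netcat pod")
	}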

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (71.85s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-055860 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-055860 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m11.849070342s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (71.85s)

TestNetworkPlugins/group/auto/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-055860 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

TestNetworkPlugins/group/auto/Localhost (0.18s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-055860 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

TestNetworkPlugins/group/auto/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-055860 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)
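
Note: the HairPin step above checks hairpin traffic: from inside the netcat pod, dial the pod's own Service ("netcat") on port 8080, so the packet leaves the pod, hits the Service VIP, and is NATed straight back to the same pod. A minimal standalone sketch of that probe, reusing the context and service names from this log:

	// hairpin.go: a connect-only netcat probe from the pod to its own Service.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("kubectl", "--context", "auto-055860",
			"exec", "deployment/netcat", "--",
			"/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080")
		if err := cmd.Run(); err != nil {
			fmt.Println("hairpin check failed:", err)
			return
		}
		fmt.Println("hairpin check passed")
	}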

                                                
                                    
TestNetworkPlugins/group/calico/Start (71.79s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-055860 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-055860 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m11.792435048s)
--- PASS: TestNetworkPlugins/group/calico/Start (71.79s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (12.02s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-kgwcp" [a9e4dd1d-089a-4c25-8602-f11f74897e71] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-kgwcp" [a9e4dd1d-089a-4c25-8602-f11f74897e71] Running
E1005 20:46:42.402883  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/functional-368978/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.019090493s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (12.02s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-kgwcp" [a9e4dd1d-089a-4c25-8602-f11f74897e71] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009896667s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-510861 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.3s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-510861 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/embed-certs/serial/Pause (2.96s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-510861 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-510861 -n embed-certs-510861
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-510861 -n embed-certs-510861: exit status 2 (314.738475ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-510861 -n embed-certs-510861
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-510861 -n embed-certs-510861: exit status 2 (333.80646ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-510861 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-510861 -n embed-certs-510861
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-510861 -n embed-certs-510861
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.96s)

TestNetworkPlugins/group/custom-flannel/Start (62.29s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-055860 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-055860 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m2.286960579s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (62.29s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (15.02s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-xskq8" [131a363d-7abc-4c4a-b862-fe4ad209b3c8] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-xskq8" [131a363d-7abc-4c4a-b862-fe4ad209b3c8] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 15.021072309s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (15.02s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-hxlnr" [bb6ab288-08fb-4695-93ac-e10434e8191e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.020241727s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-055860 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.34s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-055860 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-4jr4g" [ca6bd2c4-213a-481c-9f60-019cc318059c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-4jr4g" [ca6bd2c4-213a-481c-9f60-019cc318059c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.013164727s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.34s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-xskq8" [131a363d-7abc-4c4a-b862-fe4ad209b3c8] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.014074447s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-520141 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.34s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-520141 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.34s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.06s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-520141 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-520141 -n default-k8s-diff-port-520141
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-520141 -n default-k8s-diff-port-520141: exit status 2 (297.32667ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-520141 -n default-k8s-diff-port-520141
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-520141 -n default-k8s-diff-port-520141: exit status 2 (317.165967ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-520141 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-520141 -n default-k8s-diff-port-520141
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-520141 -n default-k8s-diff-port-520141
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.06s)
E1005 20:48:34.259487  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/old-k8s-version-392205/client.crt: no such file or directory

TestNetworkPlugins/group/kindnet/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-055860 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

TestNetworkPlugins/group/kindnet/Localhost (0.18s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-055860 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.18s)

TestNetworkPlugins/group/kindnet/HairPin (0.22s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-055860 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.22s)

TestNetworkPlugins/group/enable-default-cni/Start (40s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-055860 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E1005 20:47:38.024613  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/no-preload-003356/client.crt: no such file or directory
E1005 20:47:38.029936  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/no-preload-003356/client.crt: no such file or directory
E1005 20:47:38.040360  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/no-preload-003356/client.crt: no such file or directory
E1005 20:47:38.060678  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/no-preload-003356/client.crt: no such file or directory
E1005 20:47:38.101174  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/no-preload-003356/client.crt: no such file or directory
E1005 20:47:38.181991  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/no-preload-003356/client.crt: no such file or directory
E1005 20:47:38.342359  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/no-preload-003356/client.crt: no such file or directory
E1005 20:47:38.662981  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/no-preload-003356/client.crt: no such file or directory
E1005 20:47:39.304143  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/no-preload-003356/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-055860 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (39.999978529s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (40.00s)

TestNetworkPlugins/group/calico/ControllerPod (5.04s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-44mhh" [1163f678-0bc3-409c-aef5-b18047de62b7] Running
E1005 20:47:40.585141  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/no-preload-003356/client.crt: no such file or directory
E1005 20:47:43.145576  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/no-preload-003356/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.023573456s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.04s)

TestNetworkPlugins/group/calico/KubeletFlags (0.29s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-055860 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

TestNetworkPlugins/group/calico/NetCatPod (10.51s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-055860 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-2j8vn" [74a27631-10da-434f-94fa-29a3323fa5e3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-2j8vn" [74a27631-10da-434f-94fa-29a3323fa5e3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.065245548s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.51s)

TestNetworkPlugins/group/flannel/Start (60.04s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-055860 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-055860 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m0.042846707s)
--- PASS: TestNetworkPlugins/group/flannel/Start (60.04s)

TestNetworkPlugins/group/calico/DNS (0.26s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-055860 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.26s)

TestNetworkPlugins/group/calico/Localhost (0.23s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-055860 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.23s)

TestNetworkPlugins/group/calico/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-055860 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-055860 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.29s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-055860 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-kg78t" [bbd02aa0-b26d-4c6a-92e6-2cea0374b463] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-kg78t" [bbd02aa0-b26d-4c6a-92e6-2cea0374b463] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.010443088s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.29s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.36s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-055860 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.36s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.32s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-055860 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-4vstc" [5701b090-7a5d-4ee4-8549-6d20facefc8c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-4vstc" [5701b090-7a5d-4ee4-8549-6d20facefc8c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.01155502s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.32s)

TestNetworkPlugins/group/custom-flannel/DNS (0.27s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-055860 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.27s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-055860 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-055860 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

TestNetworkPlugins/group/bridge/Start (78.43s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-055860 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1005 20:48:18.987000  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/no-preload-003356/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-055860 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m18.425900623s)
--- PASS: TestNetworkPlugins/group/bridge/Start (78.43s)

TestNetworkPlugins/group/enable-default-cni/DNS (33.22s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-055860 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context enable-default-cni-055860 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.200698284s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-055860 exec deployment/netcat -- nslookup kubernetes.default
E1005 20:48:36.819703  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/old-k8s-version-392205/client.crt: no such file or directory
E1005 20:48:41.940307  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/old-k8s-version-392205/client.crt: no such file or directory
net_test.go:175: (dbg) Non-zero exit: kubectl --context enable-default-cni-055860 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.16774s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
E1005 20:48:51.661667  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/ingress-addon-legacy-540731/client.crt: no such file or directory
E1005 20:48:52.181347  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/old-k8s-version-392205/client.crt: no such file or directory
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-055860 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (33.22s)
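
Note: the two timeouts above are why this step retries instead of failing fast: immediately after the bridge CNI and CoreDNS come up, in-cluster lookups can time out before eventually succeeding, and the third nslookup passes. A minimal Go sketch of such a retry loop; the three-attempt/10-second schedule is an assumption for illustration, not the suite's actual backoff:

	// dnsretry.go: retry an in-cluster DNS lookup until it succeeds.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		for attempt := 1; attempt <= 3; attempt++ {
			cmd := exec.Command("kubectl", "--context", "enable-default-cni-055860",
				"exec", "deployment/netcat", "--", "nslookup", "kubernetes.default")
			if out, err := cmd.CombinedOutput(); err == nil {
				fmt.Printf("DNS resolved on attempt %d:\n%s", attempt, out)
				return
			}
			time.Sleep(10 * time.Second) // give CoreDNS time to become reachable
		}
		fmt.Println("DNS never resolved")
	}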

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-p9vj9" [02d98242-7cd6-4a29-885a-68bf7df07498] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.0172423s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-055860 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-055860 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-055860 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

TestNetworkPlugins/group/flannel/NetCatPod (10.31s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-055860 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-6whss" [ed8cb861-2c91-4eed-9199-9b4f50a73182] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-6whss" [ed8cb861-2c91-4eed-9199-9b4f50a73182] Running
E1005 20:48:59.947618  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/no-preload-003356/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.010072249s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.31s)

TestNetworkPlugins/group/flannel/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-055860 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

TestNetworkPlugins/group/flannel/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-055860 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

TestNetworkPlugins/group/flannel/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-055860 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-055860 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

TestNetworkPlugins/group/bridge/NetCatPod (11.28s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-055860 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-mqcxk" [bd5656b6-c81b-435f-86d5-6501e94787dc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-mqcxk" [bd5656b6-c81b-435f-86d5-6501e94787dc] Running
E1005 20:49:45.450945  340929 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/functional-368978/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.009581372s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.28s)

TestNetworkPlugins/group/bridge/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-055860 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

TestNetworkPlugins/group/bridge/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-055860 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

TestNetworkPlugins/group/bridge/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-055860 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

Test skip (24/307)

TestDownloadOnly/v1.16.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.2/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.2/cached-images (0.00s)

TestDownloadOnly/v1.28.2/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.2/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.2/binaries (0.00s)

TestDownloadOnly/v1.28.2/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.2/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.2/kubectl (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:496: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (3.69s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-055860 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-055860

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-055860

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-055860

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-055860

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-055860

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-055860

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-055860

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-055860

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-055860

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-055860

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055860"

>>> host: /etc/hosts:
* Profile "kubenet-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055860"

>>> host: /etc/resolv.conf:
* Profile "kubenet-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055860"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-055860

>>> host: crictl pods:
* Profile "kubenet-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055860"

>>> host: crictl containers:
* Profile "kubenet-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055860"

>>> k8s: describe netcat deployment:
error: context "kubenet-055860" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-055860" does not exist

>>> k8s: netcat logs:
error: context "kubenet-055860" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-055860" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-055860" does not exist

>>> k8s: coredns logs:
error: context "kubenet-055860" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-055860" does not exist

>>> k8s: api server logs:
error: context "kubenet-055860" does not exist

>>> host: /etc/cni:
* Profile "kubenet-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055860"

>>> host: ip a s:
* Profile "kubenet-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055860"

>>> host: ip r s:
* Profile "kubenet-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055860"

>>> host: iptables-save:
* Profile "kubenet-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055860"

>>> host: iptables table nat:
* Profile "kubenet-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055860"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-055860" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-055860" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-055860" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055860"

>>> host: kubelet daemon config:
* Profile "kubenet-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055860"

>>> k8s: kubelet logs:
* Profile "kubenet-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055860"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055860"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055860"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17363-334135/.minikube/ca.crt
    server: https://127.0.0.1:33228
  name: missing-upgrade-739534
contexts:
- context:
    cluster: missing-upgrade-739534
    user: missing-upgrade-739534
  name: missing-upgrade-739534
current-context: ""
kind: Config
preferences: {}
users:
- name: missing-upgrade-739534
  user:
    client-certificate: /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/missing-upgrade-739534/client.crt
    client-key: /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/missing-upgrade-739534/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-055860

>>> host: docker daemon status:
* Profile "kubenet-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055860"

>>> host: docker daemon config:
* Profile "kubenet-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055860"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055860"

>>> host: docker system info:
* Profile "kubenet-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055860"

>>> host: cri-docker daemon status:
* Profile "kubenet-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055860"

>>> host: cri-docker daemon config:
* Profile "kubenet-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055860"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055860"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055860"

>>> host: cri-dockerd version:
* Profile "kubenet-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055860"

>>> host: containerd daemon status:
* Profile "kubenet-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055860"

>>> host: containerd daemon config:
* Profile "kubenet-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055860"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055860"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055860"

>>> host: containerd config dump:
* Profile "kubenet-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055860"

>>> host: crio daemon status:
* Profile "kubenet-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055860"

>>> host: crio daemon config:
* Profile "kubenet-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055860"

>>> host: /etc/crio:
* Profile "kubenet-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055860"

>>> host: crio config:
* Profile "kubenet-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055860"

----------------------- debugLogs end: kubenet-055860 [took: 3.540183984s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-055860" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-055860
--- SKIP: TestNetworkPlugins/group/kubenet (3.69s)
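
One note on the debugLogs above: every kubectl probe failed because no kubenet-055860 context exists; the kubectl config dump shows only a stale missing-upgrade-739534 entry and an empty current-context, since the profile was deliberately never started. A hypothetical client-go snippet that surfaces the same facts (not part of minikube's helpers; assumes k8s.io/client-go is in go.mod and KUBECONFIG points at the file shown above):

// Hypothetical diagnostic: list the contexts a kubeconfig actually contains.
// An absent context name or empty current-context is exactly what produces
// the repeated "context was not found" errors in the dump.
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG"))
	if err != nil {
		fmt.Println("failed to load kubeconfig:", err)
		return
	}
	fmt.Printf("current-context: %q\n", cfg.CurrentContext)
	for name := range cfg.Contexts {
		fmt.Println("context:", name)
	}
}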

TestNetworkPlugins/group/cilium (3.46s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it interferes with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-055860 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-055860

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-055860

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-055860

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-055860

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-055860

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-055860

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-055860

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-055860

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-055860

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-055860

>>> host: /etc/nsswitch.conf:
* Profile "cilium-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055860"

>>> host: /etc/hosts:
* Profile "cilium-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055860"

>>> host: /etc/resolv.conf:
* Profile "cilium-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055860"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-055860

>>> host: crictl pods:
* Profile "cilium-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055860"

>>> host: crictl containers:
* Profile "cilium-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055860"

>>> k8s: describe netcat deployment:
error: context "cilium-055860" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-055860" does not exist

>>> k8s: netcat logs:
error: context "cilium-055860" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-055860" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-055860" does not exist

>>> k8s: coredns logs:
error: context "cilium-055860" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-055860" does not exist

>>> k8s: api server logs:
error: context "cilium-055860" does not exist

>>> host: /etc/cni:
* Profile "cilium-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055860"

>>> host: ip a s:
* Profile "cilium-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055860"

>>> host: ip r s:
* Profile "cilium-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055860"

>>> host: iptables-save:
* Profile "cilium-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055860"

>>> host: iptables table nat:
* Profile "cilium-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055860"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-055860

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-055860

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-055860" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-055860" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-055860

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-055860

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-055860" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-055860" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-055860" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-055860" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-055860" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055860"

>>> host: kubelet daemon config:
* Profile "cilium-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055860"

>>> k8s: kubelet logs:
* Profile "cilium-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055860"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055860"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055860"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17363-334135/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 05 Oct 2023 20:34:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.76.2:8443
  name: force-systemd-env-683528
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17363-334135/.minikube/ca.crt
    server: https://127.0.0.1:33228
  name: missing-upgrade-739534
contexts:
- context:
    cluster: force-systemd-env-683528
    extensions:
    - extension:
        last-update: Thu, 05 Oct 2023 20:34:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: force-systemd-env-683528
  name: force-systemd-env-683528
- context:
    cluster: missing-upgrade-739534
    user: missing-upgrade-739534
  name: missing-upgrade-739534
current-context: force-systemd-env-683528
kind: Config
preferences: {}
users:
- name: force-systemd-env-683528
  user:
    client-certificate: /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/force-systemd-env-683528/client.crt
    client-key: /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/force-systemd-env-683528/client.key
- name: missing-upgrade-739534
  user:
    client-certificate: /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/missing-upgrade-739534/client.crt
    client-key: /home/jenkins/minikube-integration/17363-334135/.minikube/profiles/missing-upgrade-739534/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-055860

>>> host: docker daemon status:
* Profile "cilium-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055860"

>>> host: docker daemon config:
* Profile "cilium-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055860"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055860"

>>> host: docker system info:
* Profile "cilium-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055860"

>>> host: cri-docker daemon status:
* Profile "cilium-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055860"

>>> host: cri-docker daemon config:
* Profile "cilium-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055860"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055860"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055860"

>>> host: cri-dockerd version:
* Profile "cilium-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055860"

>>> host: containerd daemon status:
* Profile "cilium-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055860"

>>> host: containerd daemon config:
* Profile "cilium-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055860"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055860"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055860"

>>> host: containerd config dump:
* Profile "cilium-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055860"

>>> host: crio daemon status:
* Profile "cilium-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055860"

>>> host: crio daemon config:
* Profile "cilium-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055860"

>>> host: /etc/crio:
* Profile "cilium-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055860"

>>> host: crio config:
* Profile "cilium-055860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055860"

----------------------- debugLogs end: cilium-055860 [took: 3.249592837s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-055860" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-055860
--- SKIP: TestNetworkPlugins/group/cilium (3.46s)

TestStartStop/group/disable-driver-mounts (0.15s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-438004" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-438004
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)