Test Report: Docker_Linux_crio 17491

b9c6c6ec15a37d1e4d613f5544f316161403a793:2023-10-26:31608

Failed tests (6/308)

| Order | Failed test                                         | Duration (s) |
|-------|-----------------------------------------------------|--------------|
| 28    | TestAddons/parallel/Ingress                         | 153.48       |
| 29    | TestAddons/parallel/InspektorGadget                 | 8.83         |
| 159   | TestIngressAddonLegacy/serial/ValidateIngressAddons | 187.5        |
| 209   | TestMultiNode/serial/PingHostFrom2Pods              | 3.53         |
| 230   | TestRunningBinaryUpgrade                            | 73.43        |
| 245   | TestStoppedBinaryUpgrade/Upgrade                    | 77.64        |
TestAddons/parallel/Ingress (153.48s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-211632 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-211632 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-211632 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [423e7dea-1b3a-4901-936b-1665d482b775] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [423e7dea-1b3a-4901-936b-1665d482b775] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.085905152s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p addons-211632 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-211632 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.742894382s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:285: (dbg) Run:  kubectl --context addons-211632 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p addons-211632 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p addons-211632 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p addons-211632 addons disable ingress-dns --alsologtostderr -v=1: (1.208229913s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p addons-211632 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p addons-211632 addons disable ingress --alsologtostderr -v=1: (7.632453479s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-211632
helpers_test.go:235: (dbg) docker inspect addons-211632:

-- stdout --
	[
	    {
	        "Id": "c35c1efbfb1308521a0fad3c55b09428dda85e5e3a6610a52bcdc6463385c9e1",
	        "Created": "2023-10-26T00:54:16.451511291Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 16821,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-26T00:54:16.745965318Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3e615aae66792e89a7d2c001b5c02b5e78a999706d53f7c8dbfcff1520487fdd",
	        "ResolvConfPath": "/var/lib/docker/containers/c35c1efbfb1308521a0fad3c55b09428dda85e5e3a6610a52bcdc6463385c9e1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c35c1efbfb1308521a0fad3c55b09428dda85e5e3a6610a52bcdc6463385c9e1/hostname",
	        "HostsPath": "/var/lib/docker/containers/c35c1efbfb1308521a0fad3c55b09428dda85e5e3a6610a52bcdc6463385c9e1/hosts",
	        "LogPath": "/var/lib/docker/containers/c35c1efbfb1308521a0fad3c55b09428dda85e5e3a6610a52bcdc6463385c9e1/c35c1efbfb1308521a0fad3c55b09428dda85e5e3a6610a52bcdc6463385c9e1-json.log",
	        "Name": "/addons-211632",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-211632:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-211632",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d5f8f2ff13ef010d1eeb0dcf7693776bdfa0d2948114563624701a77b5421ecd-init/diff:/var/lib/docker/overlay2/007d7e88bd091d08c1a177e3000477192ad6785f5c636023d34df0777872a721/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d5f8f2ff13ef010d1eeb0dcf7693776bdfa0d2948114563624701a77b5421ecd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d5f8f2ff13ef010d1eeb0dcf7693776bdfa0d2948114563624701a77b5421ecd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d5f8f2ff13ef010d1eeb0dcf7693776bdfa0d2948114563624701a77b5421ecd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-211632",
	                "Source": "/var/lib/docker/volumes/addons-211632/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-211632",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-211632",
	                "name.minikube.sigs.k8s.io": "addons-211632",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5d474e9f4630ed1951b26df644f78270f76beb39a9e3abbc81b1744a46066432",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/5d474e9f4630",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-211632": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "c35c1efbfb13",
	                        "addons-211632"
	                    ],
	                    "NetworkID": "b957c5cf203521d5b26819ec1325095eba54611228466abbd505078bd4f5873a",
	                    "EndpointID": "59f2864850b537f7432eb1a950e1ba4fbdd9aa7a46b5eb4d2666aa3dc4dce0a2",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-211632 -n addons-211632
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-211632 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-211632 logs -n 25: (1.188348349s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-179503                                                                     | download-only-179503   | jenkins | v1.31.2 | 26 Oct 23 00:53 UTC | 26 Oct 23 00:53 UTC |
	| delete  | -p download-only-179503                                                                     | download-only-179503   | jenkins | v1.31.2 | 26 Oct 23 00:53 UTC | 26 Oct 23 00:53 UTC |
	| start   | --download-only -p                                                                          | download-docker-912806 | jenkins | v1.31.2 | 26 Oct 23 00:53 UTC |                     |
	|         | download-docker-912806                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-912806                                                                   | download-docker-912806 | jenkins | v1.31.2 | 26 Oct 23 00:53 UTC | 26 Oct 23 00:53 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-014731   | jenkins | v1.31.2 | 26 Oct 23 00:53 UTC |                     |
	|         | binary-mirror-014731                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:40063                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-014731                                                                     | binary-mirror-014731   | jenkins | v1.31.2 | 26 Oct 23 00:53 UTC | 26 Oct 23 00:53 UTC |
	| addons  | disable dashboard -p                                                                        | addons-211632          | jenkins | v1.31.2 | 26 Oct 23 00:53 UTC |                     |
	|         | addons-211632                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-211632          | jenkins | v1.31.2 | 26 Oct 23 00:53 UTC |                     |
	|         | addons-211632                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-211632 --wait=true                                                                | addons-211632          | jenkins | v1.31.2 | 26 Oct 23 00:53 UTC | 26 Oct 23 00:56 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-211632          | jenkins | v1.31.2 | 26 Oct 23 00:56 UTC | 26 Oct 23 00:56 UTC |
	|         | -p addons-211632                                                                            |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-211632          | jenkins | v1.31.2 | 26 Oct 23 00:56 UTC | 26 Oct 23 00:56 UTC |
	|         | -p addons-211632                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-211632 addons disable                                                                | addons-211632          | jenkins | v1.31.2 | 26 Oct 23 00:56 UTC | 26 Oct 23 00:56 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ssh     | addons-211632 ssh cat                                                                       | addons-211632          | jenkins | v1.31.2 | 26 Oct 23 00:56 UTC | 26 Oct 23 00:56 UTC |
	|         | /opt/local-path-provisioner/pvc-12cb842a-8d18-426c-8f30-ad9da7858417_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-211632 addons disable                                                                | addons-211632          | jenkins | v1.31.2 | 26 Oct 23 00:56 UTC | 26 Oct 23 00:57 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-211632          | jenkins | v1.31.2 | 26 Oct 23 00:56 UTC | 26 Oct 23 00:56 UTC |
	|         | addons-211632                                                                               |                        |         |         |                     |                     |
	| ip      | addons-211632 ip                                                                            | addons-211632          | jenkins | v1.31.2 | 26 Oct 23 00:56 UTC | 26 Oct 23 00:56 UTC |
	| addons  | addons-211632 addons disable                                                                | addons-211632          | jenkins | v1.31.2 | 26 Oct 23 00:56 UTC | 26 Oct 23 00:56 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-211632 addons                                                                        | addons-211632          | jenkins | v1.31.2 | 26 Oct 23 00:56 UTC | 26 Oct 23 00:56 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-211632          | jenkins | v1.31.2 | 26 Oct 23 00:56 UTC |                     |
	|         | addons-211632                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-211632 ssh curl -s                                                                   | addons-211632          | jenkins | v1.31.2 | 26 Oct 23 00:56 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | addons-211632 addons                                                                        | addons-211632          | jenkins | v1.31.2 | 26 Oct 23 00:57 UTC | 26 Oct 23 00:57 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-211632 addons                                                                        | addons-211632          | jenkins | v1.31.2 | 26 Oct 23 00:57 UTC | 26 Oct 23 00:57 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-211632 ip                                                                            | addons-211632          | jenkins | v1.31.2 | 26 Oct 23 00:58 UTC | 26 Oct 23 00:58 UTC |
	| addons  | addons-211632 addons disable                                                                | addons-211632          | jenkins | v1.31.2 | 26 Oct 23 00:58 UTC | 26 Oct 23 00:58 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-211632 addons disable                                                                | addons-211632          | jenkins | v1.31.2 | 26 Oct 23 00:58 UTC | 26 Oct 23 00:59 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/26 00:53:52
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 00:53:52.159138   16147 out.go:296] Setting OutFile to fd 1 ...
	I1026 00:53:52.159281   16147 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1026 00:53:52.159291   16147 out.go:309] Setting ErrFile to fd 2...
	I1026 00:53:52.159295   16147 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1026 00:53:52.159469   16147 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17491-8444/.minikube/bin
	I1026 00:53:52.160081   16147 out.go:303] Setting JSON to false
	I1026 00:53:52.160904   16147 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2178,"bootTime":1698279454,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 00:53:52.160970   16147 start.go:138] virtualization: kvm guest
	I1026 00:53:52.163527   16147 out.go:177] * [addons-211632] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1026 00:53:52.165365   16147 out.go:177]   - MINIKUBE_LOCATION=17491
	I1026 00:53:52.165328   16147 notify.go:220] Checking for updates...
	I1026 00:53:52.168739   16147 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 00:53:52.170321   16147 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17491-8444/kubeconfig
	I1026 00:53:52.172150   16147 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17491-8444/.minikube
	I1026 00:53:52.174016   16147 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 00:53:52.175520   16147 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 00:53:52.177260   16147 driver.go:378] Setting default libvirt URI to qemu:///system
	I1026 00:53:52.199075   16147 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1026 00:53:52.199155   16147 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 00:53:52.251056   16147 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2023-10-26 00:53:52.242458271 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 00:53:52.251170   16147 docker.go:295] overlay module found
	I1026 00:53:52.253489   16147 out.go:177] * Using the docker driver based on user configuration
	I1026 00:53:52.255231   16147 start.go:298] selected driver: docker
	I1026 00:53:52.255250   16147 start.go:902] validating driver "docker" against <nil>
	I1026 00:53:52.255262   16147 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 00:53:52.256070   16147 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 00:53:52.306224   16147 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2023-10-26 00:53:52.297800414 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 00:53:52.306445   16147 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1026 00:53:52.306654   16147 start_flags.go:934] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 00:53:52.308690   16147 out.go:177] * Using Docker driver with root privileges
	I1026 00:53:52.310657   16147 cni.go:84] Creating CNI manager for ""
	I1026 00:53:52.310685   16147 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 00:53:52.310702   16147 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1026 00:53:52.310718   16147 start_flags.go:323] config:
	{Name:addons-211632 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-211632 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1026 00:53:52.312619   16147 out.go:177] * Starting control plane node addons-211632 in cluster addons-211632
	I1026 00:53:52.314378   16147 cache.go:121] Beginning downloading kic base image for docker with crio
	I1026 00:53:52.316065   16147 out.go:177] * Pulling base image ...
	I1026 00:53:52.317726   16147 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1026 00:53:52.317779   16147 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17491-8444/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1026 00:53:52.317793   16147 cache.go:56] Caching tarball of preloaded images
	I1026 00:53:52.317901   16147 preload.go:174] Found /home/jenkins/minikube-integration/17491-8444/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 00:53:52.317915   16147 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1026 00:53:52.317895   16147 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon
	I1026 00:53:52.318311   16147 profile.go:148] Saving config to /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/config.json ...
	I1026 00:53:52.318338   16147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/config.json: {Name:mk9ebe6d7e171a85ebe7053e9ea40c2a25508f10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 00:53:52.334000   16147 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 to local cache
	I1026 00:53:52.334114   16147 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local cache directory
	I1026 00:53:52.334134   16147 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local cache directory, skipping pull
	I1026 00:53:52.334140   16147 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 exists in cache, skipping pull
	I1026 00:53:52.334153   16147 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 as a tarball
	I1026 00:53:52.334164   16147 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 from local cache
	I1026 00:54:03.408299   16147 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 from cached tarball
	I1026 00:54:03.408324   16147 cache.go:194] Successfully downloaded all kic artifacts
	I1026 00:54:03.408352   16147 start.go:365] acquiring machines lock for addons-211632: {Name:mkffd89f32a0bb9cab225acc87f1ded3e2ae28fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 00:54:03.408445   16147 start.go:369] acquired machines lock for "addons-211632" in 71.984µs
	I1026 00:54:03.408467   16147 start.go:93] Provisioning new machine with config: &{Name:addons-211632 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-211632 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 00:54:03.408564   16147 start.go:125] createHost starting for "" (driver="docker")
	I1026 00:54:03.410739   16147 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1026 00:54:03.410988   16147 start.go:159] libmachine.API.Create for "addons-211632" (driver="docker")
	I1026 00:54:03.411020   16147 client.go:168] LocalClient.Create starting
	I1026 00:54:03.411106   16147 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17491-8444/.minikube/certs/ca.pem
	I1026 00:54:03.566148   16147 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17491-8444/.minikube/certs/cert.pem
	I1026 00:54:03.908085   16147 cli_runner.go:164] Run: docker network inspect addons-211632 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1026 00:54:03.923333   16147 cli_runner.go:211] docker network inspect addons-211632 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1026 00:54:03.923392   16147 network_create.go:281] running [docker network inspect addons-211632] to gather additional debugging logs...
	I1026 00:54:03.923408   16147 cli_runner.go:164] Run: docker network inspect addons-211632
	W1026 00:54:03.937798   16147 cli_runner.go:211] docker network inspect addons-211632 returned with exit code 1
	I1026 00:54:03.937828   16147 network_create.go:284] error running [docker network inspect addons-211632]: docker network inspect addons-211632: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-211632 not found
	I1026 00:54:03.937840   16147 network_create.go:286] output of [docker network inspect addons-211632]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-211632 not found
	
	** /stderr **
	I1026 00:54:03.937931   16147 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 00:54:03.953203   16147 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0027f9da0}
	I1026 00:54:03.953231   16147 network_create.go:124] attempt to create docker network addons-211632 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1026 00:54:03.953265   16147 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-211632 addons-211632
	I1026 00:54:04.001100   16147 network_create.go:108] docker network addons-211632 192.168.49.0/24 created
	I1026 00:54:04.001143   16147 kic.go:121] calculated static IP "192.168.49.2" for the "addons-211632" container
	I1026 00:54:04.001195   16147 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1026 00:54:04.015563   16147 cli_runner.go:164] Run: docker volume create addons-211632 --label name.minikube.sigs.k8s.io=addons-211632 --label created_by.minikube.sigs.k8s.io=true
	I1026 00:54:04.031210   16147 oci.go:103] Successfully created a docker volume addons-211632
	I1026 00:54:04.031283   16147 cli_runner.go:164] Run: docker run --rm --name addons-211632-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-211632 --entrypoint /usr/bin/test -v addons-211632:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -d /var/lib
	I1026 00:54:11.257879   16147 cli_runner.go:217] Completed: docker run --rm --name addons-211632-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-211632 --entrypoint /usr/bin/test -v addons-211632:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -d /var/lib: (7.226551351s)
	I1026 00:54:11.257912   16147 oci.go:107] Successfully prepared a docker volume addons-211632
	I1026 00:54:11.257933   16147 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1026 00:54:11.257957   16147 kic.go:194] Starting extracting preloaded images to volume ...
	I1026 00:54:11.258023   16147 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17491-8444/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-211632:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -I lz4 -xf /preloaded.tar -C /extractDir
	I1026 00:54:16.385612   16147 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17491-8444/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-211632:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -I lz4 -xf /preloaded.tar -C /extractDir: (5.127512931s)
	I1026 00:54:16.385646   16147 kic.go:203] duration metric: took 5.127687 seconds to extract preloaded images to volume
	W1026 00:54:16.385813   16147 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1026 00:54:16.385920   16147 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1026 00:54:16.437604   16147 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-211632 --name addons-211632 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-211632 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-211632 --network addons-211632 --ip 192.168.49.2 --volume addons-211632:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883
	I1026 00:54:16.754311   16147 cli_runner.go:164] Run: docker container inspect addons-211632 --format={{.State.Running}}
	I1026 00:54:16.772159   16147 cli_runner.go:164] Run: docker container inspect addons-211632 --format={{.State.Status}}
	I1026 00:54:16.789500   16147 cli_runner.go:164] Run: docker exec addons-211632 stat /var/lib/dpkg/alternatives/iptables
	I1026 00:54:16.855179   16147 oci.go:144] the created container "addons-211632" has a running status.
	I1026 00:54:16.855211   16147 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17491-8444/.minikube/machines/addons-211632/id_rsa...
	I1026 00:54:16.979233   16147 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17491-8444/.minikube/machines/addons-211632/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1026 00:54:16.999914   16147 cli_runner.go:164] Run: docker container inspect addons-211632 --format={{.State.Status}}
	I1026 00:54:17.018862   16147 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1026 00:54:17.018887   16147 kic_runner.go:114] Args: [docker exec --privileged addons-211632 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1026 00:54:17.087888   16147 cli_runner.go:164] Run: docker container inspect addons-211632 --format={{.State.Status}}
	I1026 00:54:17.108723   16147 machine.go:88] provisioning docker machine ...
	I1026 00:54:17.108780   16147 ubuntu.go:169] provisioning hostname "addons-211632"
	I1026 00:54:17.108879   16147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-211632
	I1026 00:54:17.130599   16147 main.go:141] libmachine: Using SSH client type: native
	I1026 00:54:17.131077   16147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1026 00:54:17.131104   16147 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-211632 && echo "addons-211632" | sudo tee /etc/hostname
	I1026 00:54:17.132983   16147 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48492->127.0.0.1:32772: read: connection reset by peer
	I1026 00:54:20.271408   16147 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-211632
	
	I1026 00:54:20.271496   16147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-211632
	I1026 00:54:20.287517   16147 main.go:141] libmachine: Using SSH client type: native
	I1026 00:54:20.287986   16147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1026 00:54:20.288010   16147 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-211632' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-211632/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-211632' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 00:54:20.405628   16147 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 00:54:20.405662   16147 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17491-8444/.minikube CaCertPath:/home/jenkins/minikube-integration/17491-8444/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17491-8444/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17491-8444/.minikube}
	I1026 00:54:20.405705   16147 ubuntu.go:177] setting up certificates
	I1026 00:54:20.405715   16147 provision.go:83] configureAuth start
	I1026 00:54:20.405761   16147 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-211632
	I1026 00:54:20.422562   16147 provision.go:138] copyHostCerts
	I1026 00:54:20.422641   16147 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17491-8444/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17491-8444/.minikube/ca.pem (1078 bytes)
	I1026 00:54:20.422748   16147 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17491-8444/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17491-8444/.minikube/cert.pem (1123 bytes)
	I1026 00:54:20.422806   16147 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17491-8444/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17491-8444/.minikube/key.pem (1675 bytes)
	I1026 00:54:20.422871   16147 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17491-8444/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17491-8444/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17491-8444/.minikube/certs/ca-key.pem org=jenkins.addons-211632 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-211632]
	I1026 00:54:20.630879   16147 provision.go:172] copyRemoteCerts
	I1026 00:54:20.630932   16147 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 00:54:20.630970   16147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-211632
	I1026 00:54:20.648089   16147 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/addons-211632/id_rsa Username:docker}
	I1026 00:54:20.737752   16147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 00:54:20.759358   16147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1026 00:54:20.780982   16147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 00:54:20.801713   16147 provision.go:86] duration metric: configureAuth took 395.974656ms
	I1026 00:54:20.801739   16147 ubuntu.go:193] setting minikube options for container-runtime
	I1026 00:54:20.801936   16147 config.go:182] Loaded profile config "addons-211632": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1026 00:54:20.802042   16147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-211632
	I1026 00:54:20.818716   16147 main.go:141] libmachine: Using SSH client type: native
	I1026 00:54:20.819051   16147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1026 00:54:20.819077   16147 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 00:54:21.024557   16147 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 00:54:21.024583   16147 machine.go:91] provisioned docker machine in 3.915827028s
	I1026 00:54:21.024597   16147 client.go:171] LocalClient.Create took 17.613565234s
	I1026 00:54:21.024613   16147 start.go:167] duration metric: libmachine.API.Create for "addons-211632" took 17.613625593s
	I1026 00:54:21.024639   16147 start.go:300] post-start starting for "addons-211632" (driver="docker")
	I1026 00:54:21.024655   16147 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 00:54:21.024706   16147 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 00:54:21.024748   16147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-211632
	I1026 00:54:21.041716   16147 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/addons-211632/id_rsa Username:docker}
	I1026 00:54:21.130200   16147 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 00:54:21.133166   16147 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 00:54:21.133196   16147 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1026 00:54:21.133205   16147 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1026 00:54:21.133212   16147 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1026 00:54:21.133225   16147 filesync.go:126] Scanning /home/jenkins/minikube-integration/17491-8444/.minikube/addons for local assets ...
	I1026 00:54:21.133305   16147 filesync.go:126] Scanning /home/jenkins/minikube-integration/17491-8444/.minikube/files for local assets ...
	I1026 00:54:21.133338   16147 start.go:303] post-start completed in 108.687187ms
	I1026 00:54:21.133658   16147 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-211632
	I1026 00:54:21.149968   16147 profile.go:148] Saving config to /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/config.json ...
	I1026 00:54:21.150256   16147 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 00:54:21.150310   16147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-211632
	I1026 00:54:21.166359   16147 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/addons-211632/id_rsa Username:docker}
	I1026 00:54:21.254471   16147 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 00:54:21.258527   16147 start.go:128] duration metric: createHost completed in 17.849950332s
	I1026 00:54:21.258554   16147 start.go:83] releasing machines lock for "addons-211632", held for 17.850096768s
	I1026 00:54:21.258619   16147 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-211632
	I1026 00:54:21.274461   16147 ssh_runner.go:195] Run: cat /version.json
	I1026 00:54:21.274505   16147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-211632
	I1026 00:54:21.274532   16147 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 00:54:21.274604   16147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-211632
	I1026 00:54:21.292422   16147 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/addons-211632/id_rsa Username:docker}
	I1026 00:54:21.292851   16147 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/addons-211632/id_rsa Username:docker}
	I1026 00:54:21.469819   16147 ssh_runner.go:195] Run: systemctl --version
	I1026 00:54:21.473748   16147 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 00:54:21.609700   16147 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1026 00:54:21.613697   16147 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 00:54:21.630672   16147 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1026 00:54:21.630744   16147 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 00:54:21.656912   16147 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1026 00:54:21.656930   16147 start.go:472] detecting cgroup driver to use...
	I1026 00:54:21.656959   16147 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1026 00:54:21.657001   16147 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 00:54:21.670361   16147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 00:54:21.680381   16147 docker.go:198] disabling cri-docker service (if available) ...
	I1026 00:54:21.680434   16147 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 00:54:21.692359   16147 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 00:54:21.704851   16147 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 00:54:21.777801   16147 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 00:54:21.861277   16147 docker.go:214] disabling docker service ...
	I1026 00:54:21.861341   16147 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 00:54:21.877820   16147 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 00:54:21.887713   16147 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 00:54:21.964385   16147 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 00:54:22.041917   16147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 00:54:22.051521   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 00:54:22.064776   16147 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1026 00:54:22.064830   16147 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 00:54:22.072920   16147 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 00:54:22.072990   16147 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 00:54:22.081236   16147 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 00:54:22.089375   16147 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 00:54:22.097702   16147 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 00:54:22.105324   16147 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 00:54:22.112591   16147 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 00:54:22.119960   16147 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 00:54:22.192893   16147 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 00:54:22.287207   16147 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 00:54:22.287268   16147 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 00:54:22.290323   16147 start.go:540] Will wait 60s for crictl version
	I1026 00:54:22.290368   16147 ssh_runner.go:195] Run: which crictl
	I1026 00:54:22.293169   16147 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1026 00:54:22.323730   16147 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1026 00:54:22.323822   16147 ssh_runner.go:195] Run: crio --version
	I1026 00:54:22.358001   16147 ssh_runner.go:195] Run: crio --version
	I1026 00:54:22.391795   16147 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.6 ...
	I1026 00:54:22.393102   16147 cli_runner.go:164] Run: docker network inspect addons-211632 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 00:54:22.408233   16147 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1026 00:54:22.411487   16147 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 00:54:22.421221   16147 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1026 00:54:22.421281   16147 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 00:54:22.473416   16147 crio.go:496] all images are preloaded for cri-o runtime.
	I1026 00:54:22.473438   16147 crio.go:415] Images already preloaded, skipping extraction
	I1026 00:54:22.473485   16147 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 00:54:22.505431   16147 crio.go:496] all images are preloaded for cri-o runtime.
	I1026 00:54:22.505452   16147 cache_images.go:84] Images are preloaded, skipping loading
	I1026 00:54:22.505503   16147 ssh_runner.go:195] Run: crio config
	I1026 00:54:22.546519   16147 cni.go:84] Creating CNI manager for ""
	I1026 00:54:22.546538   16147 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 00:54:22.546556   16147 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1026 00:54:22.546574   16147 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-211632 NodeName:addons-211632 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 00:54:22.546730   16147 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-211632"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 00:54:22.546791   16147 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-211632 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:addons-211632 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1026 00:54:22.546850   16147 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1026 00:54:22.554704   16147 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 00:54:22.554772   16147 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 00:54:22.562102   16147 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I1026 00:54:22.577028   16147 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 00:54:22.592204   16147 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I1026 00:54:22.607512   16147 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1026 00:54:22.610502   16147 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 00:54:22.620217   16147 certs.go:56] Setting up /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632 for IP: 192.168.49.2
	I1026 00:54:22.620256   16147 certs.go:190] acquiring lock for shared ca certs: {Name:mk5c45c423cc5a6761a0ccf5b25a0c8b531fe271 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 00:54:22.620389   16147 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17491-8444/.minikube/ca.key
	I1026 00:54:22.679611   16147 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17491-8444/.minikube/ca.crt ...
	I1026 00:54:22.679639   16147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17491-8444/.minikube/ca.crt: {Name:mk2276d3b00ed6731a6512cf41e99b72143bec5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 00:54:22.679822   16147 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17491-8444/.minikube/ca.key ...
	I1026 00:54:22.679837   16147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17491-8444/.minikube/ca.key: {Name:mkffdebe349966b741a3a7f33073ebaa3f212967 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 00:54:22.679930   16147 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17491-8444/.minikube/proxy-client-ca.key
	I1026 00:54:22.854803   16147 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17491-8444/.minikube/proxy-client-ca.crt ...
	I1026 00:54:22.854832   16147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17491-8444/.minikube/proxy-client-ca.crt: {Name:mk6cdb0cf01b90dfd65a171999802d0e49391e61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 00:54:22.855006   16147 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17491-8444/.minikube/proxy-client-ca.key ...
	I1026 00:54:22.855024   16147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17491-8444/.minikube/proxy-client-ca.key: {Name:mkd5f6bb5f1850cfd0aa58b7ac491a1c9abef6c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 00:54:22.855155   16147 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/client.key
	I1026 00:54:22.855176   16147 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/client.crt with IP's: []
	I1026 00:54:22.936275   16147 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/client.crt ...
	I1026 00:54:22.936313   16147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/client.crt: {Name:mkfafa2f462e4f8bcccc960086af046d3433937e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 00:54:22.936506   16147 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/client.key ...
	I1026 00:54:22.936523   16147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/client.key: {Name:mk0b1b74286262dd198bd82f49ce42289d234d1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 00:54:22.936611   16147 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/apiserver.key.dd3b5fb2
	I1026 00:54:22.936633   16147 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1026 00:54:23.136175   16147 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/apiserver.crt.dd3b5fb2 ...
	I1026 00:54:23.136207   16147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/apiserver.crt.dd3b5fb2: {Name:mk6a266751544189db1c6ee27b8593b320cc7c79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 00:54:23.136384   16147 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/apiserver.key.dd3b5fb2 ...
	I1026 00:54:23.136401   16147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/apiserver.key.dd3b5fb2: {Name:mk34b1c3bd7489b3f5fc9661bb8d9662105dca11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 00:54:23.136495   16147 certs.go:337] copying /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/apiserver.crt
	I1026 00:54:23.136594   16147 certs.go:341] copying /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/apiserver.key
	I1026 00:54:23.136663   16147 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/proxy-client.key
	I1026 00:54:23.136686   16147 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/proxy-client.crt with IP's: []
	I1026 00:54:23.327259   16147 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/proxy-client.crt ...
	I1026 00:54:23.327293   16147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/proxy-client.crt: {Name:mk0a5db21acd4ad77bf4b4b7939dae3a538fd59a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 00:54:23.327465   16147 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/proxy-client.key ...
	I1026 00:54:23.327481   16147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/proxy-client.key: {Name:mk94a40f616d1c51d423530177e2f1a80764a1f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 00:54:23.327688   16147 certs.go:437] found cert: /home/jenkins/minikube-integration/17491-8444/.minikube/certs/home/jenkins/minikube-integration/17491-8444/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 00:54:23.327723   16147 certs.go:437] found cert: /home/jenkins/minikube-integration/17491-8444/.minikube/certs/home/jenkins/minikube-integration/17491-8444/.minikube/certs/ca.pem (1078 bytes)
	I1026 00:54:23.327747   16147 certs.go:437] found cert: /home/jenkins/minikube-integration/17491-8444/.minikube/certs/home/jenkins/minikube-integration/17491-8444/.minikube/certs/cert.pem (1123 bytes)
	I1026 00:54:23.327782   16147 certs.go:437] found cert: /home/jenkins/minikube-integration/17491-8444/.minikube/certs/home/jenkins/minikube-integration/17491-8444/.minikube/certs/key.pem (1675 bytes)
	I1026 00:54:23.328456   16147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1026 00:54:23.349801   16147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 00:54:23.370163   16147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 00:54:23.391403   16147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1026 00:54:23.413055   16147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 00:54:23.435385   16147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 00:54:23.456894   16147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 00:54:23.476970   16147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1026 00:54:23.496645   16147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 00:54:23.516687   16147 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 00:54:23.531250   16147 ssh_runner.go:195] Run: openssl version
	I1026 00:54:23.536060   16147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 00:54:23.543931   16147 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 00:54:23.546914   16147 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 26 00:54 /usr/share/ca-certificates/minikubeCA.pem
	I1026 00:54:23.546975   16147 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 00:54:23.552747   16147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 00:54:23.560630   16147 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1026 00:54:23.563477   16147 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1026 00:54:23.563528   16147 kubeadm.go:404] StartCluster: {Name:addons-211632 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-211632 Namespace:default APIServerName:minikubeCA APIServerNames:[] APISe
rverIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1026 00:54:23.563612   16147 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 00:54:23.563646   16147 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 00:54:23.595381   16147 cri.go:89] found id: ""
	I1026 00:54:23.595445   16147 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 00:54:23.603239   16147 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 00:54:23.610804   16147 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1026 00:54:23.610870   16147 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 00:54:23.618264   16147 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 00:54:23.618308   16147 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1026 00:54:23.658523   16147 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1026 00:54:23.658766   16147 kubeadm.go:322] [preflight] Running pre-flight checks
	I1026 00:54:23.691518   16147 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1026 00:54:23.691612   16147 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1045-gcp
	I1026 00:54:23.691662   16147 kubeadm.go:322] OS: Linux
	I1026 00:54:23.691733   16147 kubeadm.go:322] CGROUPS_CPU: enabled
	I1026 00:54:23.691790   16147 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1026 00:54:23.691842   16147 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1026 00:54:23.691882   16147 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1026 00:54:23.691922   16147 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1026 00:54:23.691990   16147 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1026 00:54:23.692058   16147 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1026 00:54:23.692123   16147 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1026 00:54:23.692188   16147 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1026 00:54:23.751329   16147 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 00:54:23.751467   16147 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 00:54:23.751548   16147 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1026 00:54:23.947501   16147 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 00:54:23.950872   16147 out.go:204]   - Generating certificates and keys ...
	I1026 00:54:23.951061   16147 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1026 00:54:23.951177   16147 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1026 00:54:24.180825   16147 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 00:54:24.315262   16147 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1026 00:54:24.487487   16147 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1026 00:54:24.750022   16147 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1026 00:54:24.912402   16147 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1026 00:54:24.912578   16147 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-211632 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1026 00:54:25.263176   16147 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1026 00:54:25.263330   16147 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-211632 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1026 00:54:25.584508   16147 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 00:54:25.895008   16147 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 00:54:26.181187   16147 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1026 00:54:26.181321   16147 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 00:54:26.261256   16147 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 00:54:26.599612   16147 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 00:54:26.764284   16147 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 00:54:26.877979   16147 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 00:54:26.878412   16147 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 00:54:26.881295   16147 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 00:54:26.883643   16147 out.go:204]   - Booting up control plane ...
	I1026 00:54:26.883801   16147 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 00:54:26.883922   16147 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 00:54:26.884026   16147 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 00:54:26.891579   16147 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 00:54:26.892321   16147 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 00:54:26.892414   16147 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1026 00:54:26.970752   16147 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1026 00:54:31.972630   16147 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.001966 seconds
	I1026 00:54:31.972736   16147 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 00:54:31.983523   16147 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 00:54:32.503420   16147 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 00:54:32.503636   16147 kubeadm.go:322] [mark-control-plane] Marking the node addons-211632 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 00:54:33.012570   16147 kubeadm.go:322] [bootstrap-token] Using token: iibgbk.7hhnwxs03oqbvbv8
	I1026 00:54:33.014171   16147 out.go:204]   - Configuring RBAC rules ...
	I1026 00:54:33.014333   16147 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 00:54:33.019026   16147 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 00:54:33.025603   16147 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 00:54:33.028597   16147 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 00:54:33.031072   16147 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 00:54:33.033679   16147 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 00:54:33.046028   16147 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 00:54:33.262809   16147 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1026 00:54:33.422896   16147 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1026 00:54:33.423722   16147 kubeadm.go:322] 
	I1026 00:54:33.423843   16147 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1026 00:54:33.423865   16147 kubeadm.go:322] 
	I1026 00:54:33.423981   16147 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1026 00:54:33.423991   16147 kubeadm.go:322] 
	I1026 00:54:33.424036   16147 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1026 00:54:33.424122   16147 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 00:54:33.424200   16147 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 00:54:33.424211   16147 kubeadm.go:322] 
	I1026 00:54:33.424284   16147 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1026 00:54:33.424311   16147 kubeadm.go:322] 
	I1026 00:54:33.424401   16147 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 00:54:33.424423   16147 kubeadm.go:322] 
	I1026 00:54:33.424500   16147 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1026 00:54:33.424626   16147 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 00:54:33.424735   16147 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 00:54:33.424743   16147 kubeadm.go:322] 
	I1026 00:54:33.424842   16147 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 00:54:33.424940   16147 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1026 00:54:33.424954   16147 kubeadm.go:322] 
	I1026 00:54:33.425087   16147 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token iibgbk.7hhnwxs03oqbvbv8 \
	I1026 00:54:33.425226   16147 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:fcb226ee6da23e7f860dc1a15447b5e2bdaebad51636d54784ba9f6eb94cd3aa \
	I1026 00:54:33.425259   16147 kubeadm.go:322] 	--control-plane 
	I1026 00:54:33.425269   16147 kubeadm.go:322] 
	I1026 00:54:33.425376   16147 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1026 00:54:33.425386   16147 kubeadm.go:322] 
	I1026 00:54:33.425494   16147 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token iibgbk.7hhnwxs03oqbvbv8 \
	I1026 00:54:33.425633   16147 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:fcb226ee6da23e7f860dc1a15447b5e2bdaebad51636d54784ba9f6eb94cd3aa 
	I1026 00:54:33.427294   16147 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-gcp\n", err: exit status 1
	I1026 00:54:33.427431   16147 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1026 00:54:33.427454   16147 cni.go:84] Creating CNI manager for ""
	I1026 00:54:33.427461   16147 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 00:54:33.429382   16147 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1026 00:54:33.430925   16147 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1026 00:54:33.434984   16147 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1026 00:54:33.435004   16147 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1026 00:54:33.451183   16147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1026 00:54:34.094507   16147 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 00:54:34.094614   16147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:54:34.094643   16147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=af1d352f1030f8f3ea7f97e311e7fe82ef319942 minikube.k8s.io/name=addons-211632 minikube.k8s.io/updated_at=2023_10_26T00_54_34_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:54:34.198101   16147 ops.go:34] apiserver oom_adj: -16
	I1026 00:54:34.198256   16147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:54:34.260040   16147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:54:34.828312   16147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:54:35.328638   16147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:54:35.828253   16147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:54:36.328493   16147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:54:36.827832   16147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:54:37.328476   16147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:54:37.828428   16147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:54:38.328049   16147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:54:38.827755   16147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:54:39.328192   16147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:54:39.828360   16147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:54:40.327769   16147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:54:40.828643   16147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:54:41.327802   16147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:54:41.828030   16147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:54:42.327733   16147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:54:42.828231   16147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:54:43.328143   16147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:54:43.828747   16147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:54:44.328473   16147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:54:44.828646   16147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:54:45.328484   16147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:54:45.828201   16147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:54:46.328477   16147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:54:46.398759   16147 kubeadm.go:1081] duration metric: took 12.304197478s to wait for elevateKubeSystemPrivileges.
	I1026 00:54:46.398795   16147 kubeadm.go:406] StartCluster complete in 22.835270292s
	I1026 00:54:46.398817   16147 settings.go:142] acquiring lock: {Name:mk3f6a6b512050e15c823ee035bfa16b068e5bc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 00:54:46.398933   16147 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17491-8444/kubeconfig
	I1026 00:54:46.399564   16147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17491-8444/kubeconfig: {Name:mkd7fc4e7a7060baa25a329208944605474cc380 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 00:54:46.399796   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 00:54:46.399876   16147 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
	I1026 00:54:46.400030   16147 addons.go:69] Setting volumesnapshots=true in profile "addons-211632"
	I1026 00:54:46.400039   16147 addons.go:69] Setting ingress-dns=true in profile "addons-211632"
	I1026 00:54:46.400058   16147 addons.go:231] Setting addon volumesnapshots=true in "addons-211632"
	I1026 00:54:46.400056   16147 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-211632"
	I1026 00:54:46.400071   16147 addons.go:69] Setting gcp-auth=true in profile "addons-211632"
	I1026 00:54:46.400091   16147 mustload.go:65] Loading cluster: addons-211632
	I1026 00:54:46.400108   16147 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-211632"
	I1026 00:54:46.400113   16147 host.go:66] Checking if "addons-211632" exists ...
	I1026 00:54:46.400108   16147 addons.go:69] Setting default-storageclass=true in profile "addons-211632"
	I1026 00:54:46.400140   16147 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-211632"
	I1026 00:54:46.400159   16147 addons.go:69] Setting cloud-spanner=true in profile "addons-211632"
	I1026 00:54:46.400657   16147 addons.go:231] Setting addon cloud-spanner=true in "addons-211632"
	I1026 00:54:46.400730   16147 config.go:182] Loaded profile config "addons-211632": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1026 00:54:46.400763   16147 host.go:66] Checking if "addons-211632" exists ...
	I1026 00:54:46.400630   16147 config.go:182] Loaded profile config "addons-211632": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1026 00:54:46.400804   16147 addons.go:69] Setting helm-tiller=true in profile "addons-211632"
	I1026 00:54:46.400826   16147 addons.go:231] Setting addon helm-tiller=true in "addons-211632"
	I1026 00:54:46.400853   16147 host.go:66] Checking if "addons-211632" exists ...
	I1026 00:54:46.401105   16147 cli_runner.go:164] Run: docker container inspect addons-211632 --format={{.State.Status}}
	I1026 00:54:46.401125   16147 cli_runner.go:164] Run: docker container inspect addons-211632 --format={{.State.Status}}
	I1026 00:54:46.401194   16147 cli_runner.go:164] Run: docker container inspect addons-211632 --format={{.State.Status}}
	I1026 00:54:46.401511   16147 addons.go:69] Setting ingress=true in profile "addons-211632"
	I1026 00:54:46.401547   16147 addons.go:231] Setting addon ingress=true in "addons-211632"
	I1026 00:54:46.401627   16147 host.go:66] Checking if "addons-211632" exists ...
	I1026 00:54:46.402205   16147 cli_runner.go:164] Run: docker container inspect addons-211632 --format={{.State.Status}}
	I1026 00:54:46.400062   16147 addons.go:231] Setting addon ingress-dns=true in "addons-211632"
	I1026 00:54:46.403021   16147 addons.go:69] Setting inspektor-gadget=true in profile "addons-211632"
	I1026 00:54:46.403036   16147 addons.go:231] Setting addon inspektor-gadget=true in "addons-211632"
	I1026 00:54:46.403089   16147 host.go:66] Checking if "addons-211632" exists ...
	I1026 00:54:46.403186   16147 addons.go:69] Setting metrics-server=true in profile "addons-211632"
	I1026 00:54:46.403197   16147 addons.go:231] Setting addon metrics-server=true in "addons-211632"
	I1026 00:54:46.403228   16147 host.go:66] Checking if "addons-211632" exists ...
	I1026 00:54:46.403529   16147 host.go:66] Checking if "addons-211632" exists ...
	I1026 00:54:46.404197   16147 cli_runner.go:164] Run: docker container inspect addons-211632 --format={{.State.Status}}
	I1026 00:54:46.404220   16147 cli_runner.go:164] Run: docker container inspect addons-211632 --format={{.State.Status}}
	I1026 00:54:46.404428   16147 cli_runner.go:164] Run: docker container inspect addons-211632 --format={{.State.Status}}
	I1026 00:54:46.404958   16147 cli_runner.go:164] Run: docker container inspect addons-211632 --format={{.State.Status}}
	I1026 00:54:46.400780   16147 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-211632"
	I1026 00:54:46.405261   16147 cli_runner.go:164] Run: docker container inspect addons-211632 --format={{.State.Status}}
	I1026 00:54:46.405280   16147 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-211632"
	I1026 00:54:46.405807   16147 host.go:66] Checking if "addons-211632" exists ...
	I1026 00:54:46.406767   16147 host.go:66] Checking if "addons-211632" exists ...
	I1026 00:54:46.407114   16147 cli_runner.go:164] Run: docker container inspect addons-211632 --format={{.State.Status}}
	I1026 00:54:46.407404   16147 addons.go:69] Setting registry=true in profile "addons-211632"
	I1026 00:54:46.407450   16147 addons.go:231] Setting addon registry=true in "addons-211632"
	I1026 00:54:46.407501   16147 host.go:66] Checking if "addons-211632" exists ...
	I1026 00:54:46.408000   16147 cli_runner.go:164] Run: docker container inspect addons-211632 --format={{.State.Status}}
	I1026 00:54:46.408379   16147 cli_runner.go:164] Run: docker container inspect addons-211632 --format={{.State.Status}}
	I1026 00:54:46.410837   16147 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-211632"
	I1026 00:54:46.410916   16147 addons.go:69] Setting storage-provisioner=true in profile "addons-211632"
	I1026 00:54:46.410953   16147 addons.go:231] Setting addon storage-provisioner=true in "addons-211632"
	I1026 00:54:46.411024   16147 host.go:66] Checking if "addons-211632" exists ...
	I1026 00:54:46.410865   16147 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-211632"
	I1026 00:54:46.427811   16147 cli_runner.go:164] Run: docker container inspect addons-211632 --format={{.State.Status}}
	I1026 00:54:46.428442   16147 cli_runner.go:164] Run: docker container inspect addons-211632 --format={{.State.Status}}
	I1026 00:54:46.439148   16147 host.go:66] Checking if "addons-211632" exists ...
	I1026 00:54:46.453651   16147 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.21.0
	I1026 00:54:46.455473   16147 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1026 00:54:46.455493   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1026 00:54:46.455627   16147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-211632
	I1026 00:54:46.458993   16147 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1026 00:54:46.460341   16147 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1026 00:54:46.460374   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1026 00:54:46.460334   16147 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I1026 00:54:46.461806   16147 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I1026 00:54:46.461828   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I1026 00:54:46.461873   16147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-211632
	I1026 00:54:46.460350   16147 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.1
	I1026 00:54:46.460440   16147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-211632
	I1026 00:54:46.465736   16147 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1026 00:54:46.463767   16147 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1026 00:54:46.464386   16147 addons.go:231] Setting addon default-storageclass=true in "addons-211632"
	I1026 00:54:46.468469   16147 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1026 00:54:46.467227   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1026 00:54:46.467274   16147 host.go:66] Checking if "addons-211632" exists ...
	I1026 00:54:46.467318   16147 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1026 00:54:46.470320   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1026 00:54:46.470392   16147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-211632
	I1026 00:54:46.470589   16147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-211632
	I1026 00:54:46.473190   16147 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.3
	I1026 00:54:46.472343   16147 cli_runner.go:164] Run: docker container inspect addons-211632 --format={{.State.Status}}
	I1026 00:54:46.472581   16147 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-211632"
	I1026 00:54:46.478500   16147 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1026 00:54:46.478541   16147 host.go:66] Checking if "addons-211632" exists ...
	I1026 00:54:46.481542   16147 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1026 00:54:46.480301   16147 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1026 00:54:46.480643   16147 cli_runner.go:164] Run: docker container inspect addons-211632 --format={{.State.Status}}
	I1026 00:54:46.484108   16147 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.11
	I1026 00:54:46.482984   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1026 00:54:46.484188   16147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-211632
	I1026 00:54:46.486533   16147 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1026 00:54:46.488425   16147 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1026 00:54:46.488388   16147 out.go:177]   - Using image docker.io/registry:2.8.3
	I1026 00:54:46.488304   16147 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1026 00:54:46.489986   16147 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1026 00:54:46.490076   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1026 00:54:46.490083   16147 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 00:54:46.491469   16147 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-211632" context rescaled to 1 replicas
	I1026 00:54:46.492458   16147 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1026 00:54:46.493298   16147 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 00:54:46.493352   16147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-211632
	I1026 00:54:46.494817   16147 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1026 00:54:46.495257   16147 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 00:54:46.494867   16147 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1026 00:54:46.498490   16147 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1026 00:54:46.496911   16147 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1026 00:54:46.496971   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 00:54:46.496977   16147 out.go:177] * Verifying Kubernetes components...
	I1026 00:54:46.497181   16147 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/addons-211632/id_rsa Username:docker}
	I1026 00:54:46.500997   16147 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1026 00:54:46.501052   16147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-211632
	I1026 00:54:46.501222   16147 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/addons-211632/id_rsa Username:docker}
	I1026 00:54:46.503033   16147 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1026 00:54:46.507121   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1026 00:54:46.507177   16147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-211632
	I1026 00:54:46.513076   16147 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/addons-211632/id_rsa Username:docker}
	I1026 00:54:46.513386   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1026 00:54:46.513437   16147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 00:54:46.515027   16147 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1026 00:54:46.513440   16147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-211632
	I1026 00:54:46.517744   16147 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/addons-211632/id_rsa Username:docker}
	I1026 00:54:46.519004   16147 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1026 00:54:46.519031   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1026 00:54:46.519105   16147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-211632
	I1026 00:54:46.523152   16147 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/addons-211632/id_rsa Username:docker}
	I1026 00:54:46.529352   16147 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/addons-211632/id_rsa Username:docker}
	I1026 00:54:46.542466   16147 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 00:54:46.542491   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 00:54:46.542547   16147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-211632
	I1026 00:54:46.546287   16147 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1026 00:54:46.548031   16147 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/addons-211632/id_rsa Username:docker}
	I1026 00:54:46.550889   16147 out.go:177]   - Using image docker.io/busybox:stable
	I1026 00:54:46.552539   16147 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1026 00:54:46.552560   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1026 00:54:46.552643   16147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-211632
	I1026 00:54:46.559993   16147 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/addons-211632/id_rsa Username:docker}
	I1026 00:54:46.560217   16147 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/addons-211632/id_rsa Username:docker}
	I1026 00:54:46.561796   16147 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/addons-211632/id_rsa Username:docker}
	I1026 00:54:46.562032   16147 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/addons-211632/id_rsa Username:docker}
	I1026 00:54:46.570340   16147 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/addons-211632/id_rsa Username:docker}
	I1026 00:54:46.574736   16147 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/addons-211632/id_rsa Username:docker}
	W1026 00:54:46.594081   16147 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1026 00:54:46.594115   16147 retry.go:31] will retry after 250.611729ms: ssh: handshake failed: EOF
	I1026 00:54:46.700596   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1026 00:54:46.701445   16147 node_ready.go:35] waiting up to 6m0s for node "addons-211632" to be "Ready" ...
	I1026 00:54:46.822006   16147 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I1026 00:54:46.822033   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I1026 00:54:46.899674   16147 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1026 00:54:46.899698   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1026 00:54:46.902140   16147 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1026 00:54:46.902163   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1026 00:54:46.910883   16147 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1026 00:54:46.910912   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I1026 00:54:46.998400   16147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1026 00:54:47.006998   16147 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1026 00:54:47.007025   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1026 00:54:47.008563   16147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1026 00:54:47.015141   16147 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1026 00:54:47.015168   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1026 00:54:47.092963   16147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1026 00:54:47.094088   16147 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1026 00:54:47.094139   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1026 00:54:47.101966   16147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 00:54:47.104583   16147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1026 00:54:47.105396   16147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1026 00:54:47.110740   16147 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1026 00:54:47.110802   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1026 00:54:47.190329   16147 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1026 00:54:47.190404   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1026 00:54:47.197067   16147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1026 00:54:47.307152   16147 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1026 00:54:47.307180   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1026 00:54:47.310633   16147 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1026 00:54:47.310658   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1026 00:54:47.313347   16147 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1026 00:54:47.313374   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1026 00:54:47.391419   16147 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1026 00:54:47.391453   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1026 00:54:47.398825   16147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 00:54:47.407253   16147 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1026 00:54:47.407353   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1026 00:54:47.701204   16147 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1026 00:54:47.701233   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1026 00:54:47.704878   16147 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1026 00:54:47.704907   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1026 00:54:47.707528   16147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1026 00:54:47.712859   16147 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1026 00:54:47.712882   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1026 00:54:47.991311   16147 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1026 00:54:47.991393   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1026 00:54:48.007259   16147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1026 00:54:48.091233   16147 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1026 00:54:48.091334   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1026 00:54:48.111028   16147 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1026 00:54:48.111118   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1026 00:54:48.311199   16147 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1026 00:54:48.311304   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1026 00:54:48.499749   16147 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1026 00:54:48.499785   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1026 00:54:48.597971   16147 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1026 00:54:48.598010   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1026 00:54:48.612560   16147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1026 00:54:48.695179   16147 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.994537314s)
	I1026 00:54:48.695355   16147 start.go:926] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1026 00:54:48.896979   16147 node_ready.go:58] node "addons-211632" has status "Ready":"False"
	I1026 00:54:48.994257   16147 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1026 00:54:48.994281   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I1026 00:54:49.007770   16147 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1026 00:54:49.007795   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1026 00:54:49.305489   16147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1026 00:54:49.697581   16147 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.699138405s)
	I1026 00:54:49.801589   16147 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1026 00:54:49.801666   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1026 00:54:50.190441   16147 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1026 00:54:50.190534   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1026 00:54:50.309908   16147 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1026 00:54:50.309984   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1026 00:54:50.512804   16147 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1026 00:54:50.512914   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1026 00:54:50.802735   16147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1026 00:54:51.413345   16147 node_ready.go:58] node "addons-211632" has status "Ready":"False"
	I1026 00:54:52.812931   16147 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.804324112s)
	I1026 00:54:52.812967   16147 addons.go:467] Verifying addon ingress=true in "addons-211632"
	I1026 00:54:52.813007   16147 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.719955426s)
	I1026 00:54:52.814554   16147 out.go:177] * Verifying ingress addon...
	I1026 00:54:52.813111   16147 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.711052672s)
	I1026 00:54:52.813147   16147 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.70849005s)
	I1026 00:54:52.813221   16147 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (5.70775796s)
	I1026 00:54:52.813282   16147 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.616135176s)
	I1026 00:54:52.813338   16147 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.414472532s)
	I1026 00:54:52.813406   16147 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.105840136s)
	I1026 00:54:52.813439   16147 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.806108693s)
	I1026 00:54:52.813553   16147 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.200964512s)
	I1026 00:54:52.813628   16147 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.508104775s)
	I1026 00:54:52.815922   16147 addons.go:467] Verifying addon metrics-server=true in "addons-211632"
	I1026 00:54:52.815922   16147 addons.go:467] Verifying addon registry=true in "addons-211632"
	W1026 00:54:52.815936   16147 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1026 00:54:52.815957   16147 retry.go:31] will retry after 321.977796ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1026 00:54:52.817494   16147 out.go:177] * Verifying registry addon...
	I1026 00:54:52.816690   16147 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1026 00:54:52.819694   16147 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1026 00:54:52.823744   16147 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1026 00:54:52.823761   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1026 00:54:52.826216   16147 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1026 00:54:52.827894   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:54:52.828304   16147 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1026 00:54:52.828371   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:54:52.893743   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:54:53.138254   16147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1026 00:54:53.245557   16147 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1026 00:54:53.245621   16147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-211632
	I1026 00:54:53.263481   16147 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/addons-211632/id_rsa Username:docker}
	I1026 00:54:53.332891   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:54:53.397983   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:54:53.408719   16147 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1026 00:54:53.490341   16147 addons.go:231] Setting addon gcp-auth=true in "addons-211632"
	I1026 00:54:53.490412   16147 host.go:66] Checking if "addons-211632" exists ...
	I1026 00:54:53.490937   16147 cli_runner.go:164] Run: docker container inspect addons-211632 --format={{.State.Status}}
	I1026 00:54:53.519262   16147 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1026 00:54:53.519314   16147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-211632
	I1026 00:54:53.536264   16147 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/addons-211632/id_rsa Username:docker}
	I1026 00:54:53.621272   16147 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.818467522s)
	I1026 00:54:53.621316   16147 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-211632"
	I1026 00:54:53.623201   16147 out.go:177] * Verifying csi-hostpath-driver addon...
	I1026 00:54:53.625373   16147 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1026 00:54:53.630362   16147 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1026 00:54:53.630384   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:54:53.633623   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:54:53.802069   16147 node_ready.go:58] node "addons-211632" has status "Ready":"False"
	I1026 00:54:53.832718   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:54:53.897962   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:54:54.105960   16147 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1026 00:54:54.107537   16147 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1026 00:54:54.108953   16147 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1026 00:54:54.108970   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1026 00:54:54.124997   16147 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1026 00:54:54.125022   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1026 00:54:54.137590   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:54:54.140812   16147 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1026 00:54:54.140829   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1026 00:54:54.156250   16147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1026 00:54:54.332906   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:54:54.398605   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:54:54.696542   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:54:54.894050   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:54:54.898087   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:54:55.194814   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:54:55.393454   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:54:55.398261   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:54:55.694651   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:54:55.796916   16147 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.640624765s)
	I1026 00:54:55.797902   16147 addons.go:467] Verifying addon gcp-auth=true in "addons-211632"
	I1026 00:54:55.800566   16147 out.go:177] * Verifying gcp-auth addon...
	I1026 00:54:55.802886   16147 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1026 00:54:55.805688   16147 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1026 00:54:55.805705   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:54:55.808077   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:54:55.892015   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:54:55.897841   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:54:56.137728   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:54:56.301742   16147 node_ready.go:58] node "addons-211632" has status "Ready":"False"
	I1026 00:54:56.312085   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:54:56.332333   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:54:56.398781   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:54:56.692424   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:54:56.812350   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:54:56.892989   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:54:56.898288   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:54:57.138395   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:54:57.312237   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:54:57.332105   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:54:57.398338   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:54:57.638618   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:54:57.811378   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:54:57.832801   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:54:57.899627   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:54:58.138807   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:54:58.301942   16147 node_ready.go:58] node "addons-211632" has status "Ready":"False"
	I1026 00:54:58.312087   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:54:58.332335   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:54:58.398033   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:54:58.637186   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:54:58.811318   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:54:58.832608   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:54:58.897539   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:54:59.138787   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:54:59.311402   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:54:59.332412   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:54:59.397230   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:54:59.637867   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:54:59.811461   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:54:59.832489   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:54:59.897091   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:00.138379   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:00.311246   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:00.332607   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:00.397252   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:00.637815   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:00.801200   16147 node_ready.go:58] node "addons-211632" has status "Ready":"False"
	I1026 00:55:00.811573   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:00.831570   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:00.897916   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:01.137335   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:01.311351   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:01.332360   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:01.397540   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:01.638102   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:01.811888   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:01.831894   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:01.897659   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:02.137317   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:02.311118   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:02.332167   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:02.397947   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:02.637217   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:02.810730   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:02.831773   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:02.897626   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:03.138633   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:03.301293   16147 node_ready.go:58] node "addons-211632" has status "Ready":"False"
	I1026 00:55:03.311646   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:03.331768   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:03.397578   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:03.640070   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:03.810823   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:03.831902   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:03.897622   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:04.137395   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:04.311421   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:04.332751   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:04.397978   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:04.637544   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:04.811304   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:04.832240   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:04.897017   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:05.137445   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:05.311192   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:05.332178   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:05.398220   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:05.637640   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:05.801406   16147 node_ready.go:58] node "addons-211632" has status "Ready":"False"
	I1026 00:55:05.810930   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:05.831995   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:05.897814   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:06.137379   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:06.311571   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:06.332776   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:06.397712   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:06.637219   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:06.811022   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:06.832048   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:06.897839   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:07.137650   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:07.311959   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:07.332019   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:07.397749   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:07.638883   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:07.811659   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:07.831730   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:07.897414   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:08.138052   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:08.301508   16147 node_ready.go:58] node "addons-211632" has status "Ready":"False"
	I1026 00:55:08.311211   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:08.332136   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:08.397958   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:08.637341   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:08.811293   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:08.832248   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:08.898001   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:09.137772   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:09.311543   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:09.331723   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:09.397759   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:09.638217   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:09.811080   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:09.832350   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:09.898271   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:10.138007   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:10.311079   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:10.332104   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:10.398064   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:10.637639   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:10.801220   16147 node_ready.go:58] node "addons-211632" has status "Ready":"False"
	I1026 00:55:10.811518   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:10.832369   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:10.897779   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:11.137144   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:11.311862   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:11.331978   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:11.397705   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:11.638148   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:11.811483   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:11.832891   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:11.897426   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:12.138071   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:12.311499   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:12.332709   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:12.397618   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:12.638731   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:12.811337   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:12.832432   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:12.897535   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:13.138141   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:13.300526   16147 node_ready.go:58] node "addons-211632" has status "Ready":"False"
	I1026 00:55:13.311162   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:13.332261   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:13.399945   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:13.637483   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:13.811424   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:13.832557   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:13.897384   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:14.138055   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:14.310987   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:14.332053   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:14.398183   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:14.637753   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:14.811756   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:14.831848   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:14.897977   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:15.137589   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:15.301206   16147 node_ready.go:58] node "addons-211632" has status "Ready":"False"
	I1026 00:55:15.312043   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:15.332219   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:15.398143   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:15.637780   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:15.811365   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:15.832645   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:15.897397   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:16.138073   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:16.310692   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:16.331726   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:16.397819   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:16.637503   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:16.811808   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:16.832043   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:16.897593   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:17.138276   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:17.311720   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:17.332445   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:17.398306   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:17.638481   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:17.800936   16147 node_ready.go:58] node "addons-211632" has status "Ready":"False"
	I1026 00:55:17.811485   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:17.832922   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:17.897845   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:18.137239   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:18.311244   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:18.332446   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:18.397345   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:18.638108   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:18.811330   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:18.832467   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:18.897253   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:19.137691   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:19.311568   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:19.331476   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:19.397393   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:19.637988   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:19.801619   16147 node_ready.go:58] node "addons-211632" has status "Ready":"False"
	I1026 00:55:19.811089   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:19.832271   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:19.898200   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:20.137776   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:20.311744   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:20.331700   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:20.397821   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:20.637425   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:20.811679   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:20.831639   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:20.900816   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:21.195069   16147 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1026 00:55:21.195099   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:21.301238   16147 node_ready.go:49] node "addons-211632" has status "Ready":"True"
	I1026 00:55:21.301263   16147 node_ready.go:38] duration metric: took 34.599788711s waiting for node "addons-211632" to be "Ready" ...
	I1026 00:55:21.301274   16147 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1026 00:55:21.310941   16147 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-htzfl" in "kube-system" namespace to be "Ready" ...
	I1026 00:55:21.312856   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:21.331866   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:21.399119   16147 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1026 00:55:21.399147   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:21.640941   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:21.812077   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:21.832726   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:21.897938   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:22.139040   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:22.311931   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:22.333159   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:22.398611   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:22.639986   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:22.813321   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:22.826882   16147 pod_ready.go:92] pod "coredns-5dd5756b68-htzfl" in "kube-system" namespace has status "Ready":"True"
	I1026 00:55:22.826907   16147 pod_ready.go:81] duration metric: took 1.515937118s waiting for pod "coredns-5dd5756b68-htzfl" in "kube-system" namespace to be "Ready" ...
	I1026 00:55:22.826928   16147 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-211632" in "kube-system" namespace to be "Ready" ...
	I1026 00:55:22.832597   16147 pod_ready.go:92] pod "etcd-addons-211632" in "kube-system" namespace has status "Ready":"True"
	I1026 00:55:22.832678   16147 pod_ready.go:81] duration metric: took 5.741638ms waiting for pod "etcd-addons-211632" in "kube-system" namespace to be "Ready" ...
	I1026 00:55:22.832709   16147 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-211632" in "kube-system" namespace to be "Ready" ...
	I1026 00:55:22.833175   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:22.895318   16147 pod_ready.go:92] pod "kube-apiserver-addons-211632" in "kube-system" namespace has status "Ready":"True"
	I1026 00:55:22.895350   16147 pod_ready.go:81] duration metric: took 62.621257ms waiting for pod "kube-apiserver-addons-211632" in "kube-system" namespace to be "Ready" ...
	I1026 00:55:22.895366   16147 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-211632" in "kube-system" namespace to be "Ready" ...
	I1026 00:55:22.899469   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:22.901020   16147 pod_ready.go:92] pod "kube-controller-manager-addons-211632" in "kube-system" namespace has status "Ready":"True"
	I1026 00:55:22.901044   16147 pod_ready.go:81] duration metric: took 5.668968ms waiting for pod "kube-controller-manager-addons-211632" in "kube-system" namespace to be "Ready" ...
	I1026 00:55:22.901059   16147 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5xv7d" in "kube-system" namespace to be "Ready" ...
	I1026 00:55:23.138745   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:23.301178   16147 pod_ready.go:92] pod "kube-proxy-5xv7d" in "kube-system" namespace has status "Ready":"True"
	I1026 00:55:23.301200   16147 pod_ready.go:81] duration metric: took 400.133692ms waiting for pod "kube-proxy-5xv7d" in "kube-system" namespace to be "Ready" ...
	I1026 00:55:23.301209   16147 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-211632" in "kube-system" namespace to be "Ready" ...
	I1026 00:55:23.311700   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:23.332413   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:23.398389   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:23.638555   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:23.701350   16147 pod_ready.go:92] pod "kube-scheduler-addons-211632" in "kube-system" namespace has status "Ready":"True"
	I1026 00:55:23.701377   16147 pod_ready.go:81] duration metric: took 400.16015ms waiting for pod "kube-scheduler-addons-211632" in "kube-system" namespace to be "Ready" ...
	I1026 00:55:23.701392   16147 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-8pc98" in "kube-system" namespace to be "Ready" ...
	I1026 00:55:23.813985   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:23.892569   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:23.898988   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:24.197241   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:24.311651   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:24.333393   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:24.399425   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:24.694368   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:24.812429   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:24.832703   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:24.898526   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:25.138974   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:25.311412   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:25.333455   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:25.398632   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:25.640949   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:25.815110   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:25.832943   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:25.898684   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:26.008051   16147 pod_ready.go:102] pod "metrics-server-7c66d45ddc-8pc98" in "kube-system" namespace has status "Ready":"False"
	I1026 00:55:26.138602   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:26.311333   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:26.332842   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:26.398264   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:26.639245   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:26.812338   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:26.832926   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:26.898458   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:27.139560   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:27.310986   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:27.333320   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:27.398691   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:27.640146   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:27.830440   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:27.833475   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:27.898550   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:28.138628   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:28.311950   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:28.332315   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:28.398881   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:28.507265   16147 pod_ready.go:102] pod "metrics-server-7c66d45ddc-8pc98" in "kube-system" namespace has status "Ready":"False"
	I1026 00:55:28.638573   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:28.811933   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:28.833052   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:28.899387   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:29.008903   16147 pod_ready.go:92] pod "metrics-server-7c66d45ddc-8pc98" in "kube-system" namespace has status "Ready":"True"
	I1026 00:55:29.008926   16147 pod_ready.go:81] duration metric: took 5.307527561s waiting for pod "metrics-server-7c66d45ddc-8pc98" in "kube-system" namespace to be "Ready" ...
	I1026 00:55:29.008950   16147 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-bbnbx" in "kube-system" namespace to be "Ready" ...
	I1026 00:55:29.138615   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:29.311985   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:29.332321   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:29.398799   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:29.639371   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:29.811379   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:29.832950   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:29.898754   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:30.138543   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:30.312497   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:30.333758   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:30.399177   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:30.639579   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:30.811622   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:30.832523   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:30.899016   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:31.025835   16147 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-bbnbx" in "kube-system" namespace has status "Ready":"False"
	I1026 00:55:31.139661   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:31.312091   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:31.333036   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:31.398311   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:31.638833   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:31.810995   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:31.832087   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:31.898844   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:32.138111   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:32.312071   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:32.332314   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:32.399345   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:32.639599   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:32.811766   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:32.832311   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:32.898190   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:33.138991   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:33.311458   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:33.332519   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:33.400104   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:33.525662   16147 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-bbnbx" in "kube-system" namespace has status "Ready":"False"
	I1026 00:55:33.638919   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:33.811209   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:33.832685   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:33.898628   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:34.138901   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:34.311374   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:34.333065   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:34.398636   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:34.638916   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:34.811312   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:34.832735   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:34.898073   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:35.137943   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:35.310924   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:35.332632   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:35.397939   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:35.527330   16147 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-bbnbx" in "kube-system" namespace has status "Ready":"False"
	I1026 00:55:35.698915   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:35.812215   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:35.893696   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:35.898421   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:36.190973   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:36.312828   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:36.332584   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:36.398209   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:36.639072   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:36.811979   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:36.832321   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:36.899092   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:37.138522   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:37.311660   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:37.331940   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:37.397951   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:37.692348   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:37.814875   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:37.896136   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:37.901507   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:38.026530   16147 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-bbnbx" in "kube-system" namespace has status "Ready":"False"
	I1026 00:55:38.139795   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:38.312746   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:38.333938   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:38.399613   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:38.695676   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:38.814104   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:38.832273   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:38.898719   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:39.140170   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:39.312339   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:39.333614   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:39.398866   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:39.639221   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:39.811737   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:39.833053   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:39.898420   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:40.139445   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:40.312107   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:40.333046   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:40.398126   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:40.526363   16147 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-bbnbx" in "kube-system" namespace has status "Ready":"False"
	I1026 00:55:40.639459   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:40.812263   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:40.836319   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:40.899118   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:41.196421   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:41.312278   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:41.397946   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:41.401366   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:41.693640   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:41.811844   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:41.892245   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:41.898582   16147 kapi.go:107] duration metric: took 49.078885299s to wait for kubernetes.io/minikube-addons=registry ...
	I1026 00:55:42.209535   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:42.311691   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:42.332030   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:42.640208   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:42.811036   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:42.832140   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:43.026261   16147 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-bbnbx" in "kube-system" namespace has status "Ready":"False"
	I1026 00:55:43.139792   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:43.312234   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:43.333403   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:43.639532   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:43.811947   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:43.833135   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:44.140088   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:44.312458   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:44.332946   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:44.639679   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:44.812194   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:44.833236   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:45.026882   16147 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-bbnbx" in "kube-system" namespace has status "Ready":"False"
	I1026 00:55:45.139761   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:45.311822   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:45.332337   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:45.638752   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:45.814027   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:45.832589   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:46.138311   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:46.312013   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:46.332384   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:46.639286   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:46.811164   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:46.832647   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:47.027940   16147 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-bbnbx" in "kube-system" namespace has status "Ready":"False"
	I1026 00:55:47.138238   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:47.311799   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:47.332469   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:47.640018   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:47.811884   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:47.832087   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:48.139020   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:48.311059   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:48.333488   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:48.697545   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:48.812367   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:48.894553   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:49.103090   16147 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-bbnbx" in "kube-system" namespace has status "Ready":"False"
	I1026 00:55:49.196341   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:49.312356   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:49.394180   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:49.697580   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:49.811956   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:49.832317   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:50.139406   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:50.311601   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:50.332894   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:50.694999   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:50.812381   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:50.832988   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:51.138557   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:51.312235   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:51.333438   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:51.526115   16147 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-bbnbx" in "kube-system" namespace has status "Ready":"False"
	I1026 00:55:51.639403   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:51.811471   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:51.833411   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:52.139833   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:52.311461   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:52.332578   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:52.640635   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:52.812104   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:52.833154   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:53.139289   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:53.311193   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:53.332353   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:53.639950   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:53.811084   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:53.834240   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:54.026835   16147 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-bbnbx" in "kube-system" namespace has status "Ready":"True"
	I1026 00:55:54.026866   16147 pod_ready.go:81] duration metric: took 25.01790697s waiting for pod "nvidia-device-plugin-daemonset-bbnbx" in "kube-system" namespace to be "Ready" ...
	I1026 00:55:54.026894   16147 pod_ready.go:38] duration metric: took 32.725606408s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1026 00:55:54.026914   16147 api_server.go:52] waiting for apiserver process to appear ...
	I1026 00:55:54.026942   16147 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 00:55:54.027015   16147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 00:55:54.064025   16147 cri.go:89] found id: "4a7726545672e0e1bce296d535baa0dbb287da750c1fecfc4e980fb47db3b6b3"
	I1026 00:55:54.064045   16147 cri.go:89] found id: ""
	I1026 00:55:54.064055   16147 logs.go:284] 1 containers: [4a7726545672e0e1bce296d535baa0dbb287da750c1fecfc4e980fb47db3b6b3]
	I1026 00:55:54.064118   16147 ssh_runner.go:195] Run: which crictl
	I1026 00:55:54.067239   16147 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 00:55:54.067289   16147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 00:55:54.128528   16147 cri.go:89] found id: "9dd08b1c15310e3e02aa83c8c9360860fc2c01140355df46644f94326bfa6a96"
	I1026 00:55:54.128547   16147 cri.go:89] found id: ""
	I1026 00:55:54.128554   16147 logs.go:284] 1 containers: [9dd08b1c15310e3e02aa83c8c9360860fc2c01140355df46644f94326bfa6a96]
	I1026 00:55:54.128592   16147 ssh_runner.go:195] Run: which crictl
	I1026 00:55:54.131837   16147 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 00:55:54.131900   16147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 00:55:54.138605   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:54.211736   16147 cri.go:89] found id: "ec3c9ea2041870efb48a938e4af2e89dd331147c012fa336b9d224dc6a6828b8"
	I1026 00:55:54.211755   16147 cri.go:89] found id: ""
	I1026 00:55:54.211762   16147 logs.go:284] 1 containers: [ec3c9ea2041870efb48a938e4af2e89dd331147c012fa336b9d224dc6a6828b8]
	I1026 00:55:54.211804   16147 ssh_runner.go:195] Run: which crictl
	I1026 00:55:54.215051   16147 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 00:55:54.215107   16147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 00:55:54.250830   16147 cri.go:89] found id: "943e6b682f7fd08720ef8644e4e1786ce3ebc489950b927699e9a294a1634fd4"
	I1026 00:55:54.250854   16147 cri.go:89] found id: ""
	I1026 00:55:54.250863   16147 logs.go:284] 1 containers: [943e6b682f7fd08720ef8644e4e1786ce3ebc489950b927699e9a294a1634fd4]
	I1026 00:55:54.250904   16147 ssh_runner.go:195] Run: which crictl
	I1026 00:55:54.293314   16147 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 00:55:54.293385   16147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 00:55:54.312075   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:54.328347   16147 cri.go:89] found id: "7b07cef64940d835fcfa592d994904e5ac74d1f697e0c012fbee3686ca594dc7"
	I1026 00:55:54.328371   16147 cri.go:89] found id: ""
	I1026 00:55:54.328380   16147 logs.go:284] 1 containers: [7b07cef64940d835fcfa592d994904e5ac74d1f697e0c012fbee3686ca594dc7]
	I1026 00:55:54.328435   16147 ssh_runner.go:195] Run: which crictl
	I1026 00:55:54.332397   16147 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 00:55:54.332467   16147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 00:55:54.332594   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:54.404905   16147 cri.go:89] found id: "ce5348c34feb4c043c46d6ec80097165c6c347bb62987af2221ed402e15afec5"
	I1026 00:55:54.404935   16147 cri.go:89] found id: ""
	I1026 00:55:54.404944   16147 logs.go:284] 1 containers: [ce5348c34feb4c043c46d6ec80097165c6c347bb62987af2221ed402e15afec5]
	I1026 00:55:54.405005   16147 ssh_runner.go:195] Run: which crictl
	I1026 00:55:54.409085   16147 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 00:55:54.409136   16147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 00:55:54.444168   16147 cri.go:89] found id: "77f5574833c79f17c76feaf56b853a4342b58c235340ef89f64bba26a7d6d870"
	I1026 00:55:54.444193   16147 cri.go:89] found id: ""
	I1026 00:55:54.444202   16147 logs.go:284] 1 containers: [77f5574833c79f17c76feaf56b853a4342b58c235340ef89f64bba26a7d6d870]
	I1026 00:55:54.444249   16147 ssh_runner.go:195] Run: which crictl
	I1026 00:55:54.447480   16147 logs.go:123] Gathering logs for kubelet ...
	I1026 00:55:54.447514   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 00:55:54.572073   16147 logs.go:123] Gathering logs for dmesg ...
	I1026 00:55:54.572107   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 00:55:54.583713   16147 logs.go:123] Gathering logs for describe nodes ...
	I1026 00:55:54.583743   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1026 00:55:54.638288   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:54.722191   16147 logs.go:123] Gathering logs for kube-proxy [7b07cef64940d835fcfa592d994904e5ac74d1f697e0c012fbee3686ca594dc7] ...
	I1026 00:55:54.722221   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b07cef64940d835fcfa592d994904e5ac74d1f697e0c012fbee3686ca594dc7"
	I1026 00:55:54.756120   16147 logs.go:123] Gathering logs for kube-controller-manager [ce5348c34feb4c043c46d6ec80097165c6c347bb62987af2221ed402e15afec5] ...
	I1026 00:55:54.756148   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce5348c34feb4c043c46d6ec80097165c6c347bb62987af2221ed402e15afec5"
	I1026 00:55:54.811193   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:54.824875   16147 logs.go:123] Gathering logs for kindnet [77f5574833c79f17c76feaf56b853a4342b58c235340ef89f64bba26a7d6d870] ...
	I1026 00:55:54.824915   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77f5574833c79f17c76feaf56b853a4342b58c235340ef89f64bba26a7d6d870"
	I1026 00:55:54.833208   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:54.860545   16147 logs.go:123] Gathering logs for container status ...
	I1026 00:55:54.860579   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 00:55:54.901261   16147 logs.go:123] Gathering logs for kube-apiserver [4a7726545672e0e1bce296d535baa0dbb287da750c1fecfc4e980fb47db3b6b3] ...
	I1026 00:55:54.901287   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a7726545672e0e1bce296d535baa0dbb287da750c1fecfc4e980fb47db3b6b3"
	I1026 00:55:54.947374   16147 logs.go:123] Gathering logs for etcd [9dd08b1c15310e3e02aa83c8c9360860fc2c01140355df46644f94326bfa6a96] ...
	I1026 00:55:54.947412   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9dd08b1c15310e3e02aa83c8c9360860fc2c01140355df46644f94326bfa6a96"
	I1026 00:55:54.989204   16147 logs.go:123] Gathering logs for coredns [ec3c9ea2041870efb48a938e4af2e89dd331147c012fa336b9d224dc6a6828b8] ...
	I1026 00:55:54.989244   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec3c9ea2041870efb48a938e4af2e89dd331147c012fa336b9d224dc6a6828b8"
	I1026 00:55:55.034192   16147 logs.go:123] Gathering logs for kube-scheduler [943e6b682f7fd08720ef8644e4e1786ce3ebc489950b927699e9a294a1634fd4] ...
	I1026 00:55:55.034239   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 943e6b682f7fd08720ef8644e4e1786ce3ebc489950b927699e9a294a1634fd4"
	I1026 00:55:55.125686   16147 logs.go:123] Gathering logs for CRI-O ...
	I1026 00:55:55.125730   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 00:55:55.195764   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:55.313036   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:55.395671   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:55.695041   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:55.813228   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:55.895684   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:56.195417   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:56.311724   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:56.392309   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:56.694744   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:56.812450   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:56.833551   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:57.139220   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:57.312177   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:57.332687   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:57.640436   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:57.815178   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:57.832755   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:57.860222   16147 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 00:55:57.902714   16147 api_server.go:72] duration metric: took 1m11.407706592s to wait for apiserver process to appear ...
	I1026 00:55:57.902741   16147 api_server.go:88] waiting for apiserver healthz status ...
	I1026 00:55:57.902773   16147 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 00:55:57.902816   16147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 00:55:57.937651   16147 cri.go:89] found id: "4a7726545672e0e1bce296d535baa0dbb287da750c1fecfc4e980fb47db3b6b3"
	I1026 00:55:57.937697   16147 cri.go:89] found id: ""
	I1026 00:55:57.937710   16147 logs.go:284] 1 containers: [4a7726545672e0e1bce296d535baa0dbb287da750c1fecfc4e980fb47db3b6b3]
	I1026 00:55:57.937763   16147 ssh_runner.go:195] Run: which crictl
	I1026 00:55:57.941505   16147 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 00:55:57.941566   16147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 00:55:58.011525   16147 cri.go:89] found id: "9dd08b1c15310e3e02aa83c8c9360860fc2c01140355df46644f94326bfa6a96"
	I1026 00:55:58.011553   16147 cri.go:89] found id: ""
	I1026 00:55:58.011563   16147 logs.go:284] 1 containers: [9dd08b1c15310e3e02aa83c8c9360860fc2c01140355df46644f94326bfa6a96]
	I1026 00:55:58.011623   16147 ssh_runner.go:195] Run: which crictl
	I1026 00:55:58.015378   16147 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 00:55:58.015503   16147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 00:55:58.098159   16147 cri.go:89] found id: "ec3c9ea2041870efb48a938e4af2e89dd331147c012fa336b9d224dc6a6828b8"
	I1026 00:55:58.098180   16147 cri.go:89] found id: ""
	I1026 00:55:58.098189   16147 logs.go:284] 1 containers: [ec3c9ea2041870efb48a938e4af2e89dd331147c012fa336b9d224dc6a6828b8]
	I1026 00:55:58.098254   16147 ssh_runner.go:195] Run: which crictl
	I1026 00:55:58.101548   16147 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 00:55:58.101609   16147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 00:55:58.139454   16147 cri.go:89] found id: "943e6b682f7fd08720ef8644e4e1786ce3ebc489950b927699e9a294a1634fd4"
	I1026 00:55:58.139477   16147 cri.go:89] found id: ""
	I1026 00:55:58.139487   16147 logs.go:284] 1 containers: [943e6b682f7fd08720ef8644e4e1786ce3ebc489950b927699e9a294a1634fd4]
	I1026 00:55:58.139536   16147 ssh_runner.go:195] Run: which crictl
	I1026 00:55:58.140054   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:58.142907   16147 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 00:55:58.142964   16147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 00:55:58.211366   16147 cri.go:89] found id: "7b07cef64940d835fcfa592d994904e5ac74d1f697e0c012fbee3686ca594dc7"
	I1026 00:55:58.211392   16147 cri.go:89] found id: ""
	I1026 00:55:58.211402   16147 logs.go:284] 1 containers: [7b07cef64940d835fcfa592d994904e5ac74d1f697e0c012fbee3686ca594dc7]
	I1026 00:55:58.211455   16147 ssh_runner.go:195] Run: which crictl
	I1026 00:55:58.214863   16147 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 00:55:58.214935   16147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 00:55:58.250938   16147 cri.go:89] found id: "ce5348c34feb4c043c46d6ec80097165c6c347bb62987af2221ed402e15afec5"
	I1026 00:55:58.250955   16147 cri.go:89] found id: ""
	I1026 00:55:58.250962   16147 logs.go:284] 1 containers: [ce5348c34feb4c043c46d6ec80097165c6c347bb62987af2221ed402e15afec5]
	I1026 00:55:58.251001   16147 ssh_runner.go:195] Run: which crictl
	I1026 00:55:58.290667   16147 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 00:55:58.290733   16147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 00:55:58.312987   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:58.326956   16147 cri.go:89] found id: "77f5574833c79f17c76feaf56b853a4342b58c235340ef89f64bba26a7d6d870"
	I1026 00:55:58.326980   16147 cri.go:89] found id: ""
	I1026 00:55:58.326989   16147 logs.go:284] 1 containers: [77f5574833c79f17c76feaf56b853a4342b58c235340ef89f64bba26a7d6d870]
	I1026 00:55:58.327043   16147 ssh_runner.go:195] Run: which crictl
	I1026 00:55:58.330940   16147 logs.go:123] Gathering logs for kube-controller-manager [ce5348c34feb4c043c46d6ec80097165c6c347bb62987af2221ed402e15afec5] ...
	I1026 00:55:58.330980   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce5348c34feb4c043c46d6ec80097165c6c347bb62987af2221ed402e15afec5"
	I1026 00:55:58.333777   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:58.426597   16147 logs.go:123] Gathering logs for kindnet [77f5574833c79f17c76feaf56b853a4342b58c235340ef89f64bba26a7d6d870] ...
	I1026 00:55:58.426636   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77f5574833c79f17c76feaf56b853a4342b58c235340ef89f64bba26a7d6d870"
	I1026 00:55:58.463192   16147 logs.go:123] Gathering logs for CRI-O ...
	I1026 00:55:58.463224   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 00:55:58.565297   16147 logs.go:123] Gathering logs for kubelet ...
	I1026 00:55:58.565327   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 00:55:58.640955   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:58.651879   16147 logs.go:123] Gathering logs for describe nodes ...
	I1026 00:55:58.651911   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1026 00:55:58.807375   16147 logs.go:123] Gathering logs for etcd [9dd08b1c15310e3e02aa83c8c9360860fc2c01140355df46644f94326bfa6a96] ...
	I1026 00:55:58.807421   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9dd08b1c15310e3e02aa83c8c9360860fc2c01140355df46644f94326bfa6a96"
	I1026 00:55:58.811551   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:58.833355   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:58.853156   16147 logs.go:123] Gathering logs for kube-scheduler [943e6b682f7fd08720ef8644e4e1786ce3ebc489950b927699e9a294a1634fd4] ...
	I1026 00:55:58.853190   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 943e6b682f7fd08720ef8644e4e1786ce3ebc489950b927699e9a294a1634fd4"
	I1026 00:55:58.926488   16147 logs.go:123] Gathering logs for kube-proxy [7b07cef64940d835fcfa592d994904e5ac74d1f697e0c012fbee3686ca594dc7] ...
	I1026 00:55:58.926523   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b07cef64940d835fcfa592d994904e5ac74d1f697e0c012fbee3686ca594dc7"
	I1026 00:55:58.996170   16147 logs.go:123] Gathering logs for container status ...
	I1026 00:55:58.996201   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 00:55:59.038889   16147 logs.go:123] Gathering logs for dmesg ...
	I1026 00:55:59.038915   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 00:55:59.050977   16147 logs.go:123] Gathering logs for kube-apiserver [4a7726545672e0e1bce296d535baa0dbb287da750c1fecfc4e980fb47db3b6b3] ...
	I1026 00:55:59.051006   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a7726545672e0e1bce296d535baa0dbb287da750c1fecfc4e980fb47db3b6b3"
	I1026 00:55:59.117287   16147 logs.go:123] Gathering logs for coredns [ec3c9ea2041870efb48a938e4af2e89dd331147c012fa336b9d224dc6a6828b8] ...
	I1026 00:55:59.117320   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec3c9ea2041870efb48a938e4af2e89dd331147c012fa336b9d224dc6a6828b8"
	I1026 00:55:59.139511   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:59.311992   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:59.333106   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:59.640027   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:59.812170   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:59.833185   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:56:00.140032   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:56:00.312165   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:56:00.332833   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:56:00.639622   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:56:00.811449   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:56:00.833157   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:56:01.209946   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:56:01.311991   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:56:01.332602   16147 kapi.go:107] duration metric: took 1m8.515908463s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1026 00:56:01.639357   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:56:01.654742   16147 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1026 00:56:01.661289   16147 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1026 00:56:01.662530   16147 api_server.go:141] control plane version: v1.28.3
	I1026 00:56:01.662556   16147 api_server.go:131] duration metric: took 3.759808008s to wait for apiserver health ...
	I1026 00:56:01.662570   16147 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 00:56:01.662599   16147 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 00:56:01.662656   16147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 00:56:01.708420   16147 cri.go:89] found id: "4a7726545672e0e1bce296d535baa0dbb287da750c1fecfc4e980fb47db3b6b3"
	I1026 00:56:01.708443   16147 cri.go:89] found id: ""
	I1026 00:56:01.708452   16147 logs.go:284] 1 containers: [4a7726545672e0e1bce296d535baa0dbb287da750c1fecfc4e980fb47db3b6b3]
	I1026 00:56:01.708505   16147 ssh_runner.go:195] Run: which crictl
	I1026 00:56:01.711892   16147 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 00:56:01.711956   16147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 00:56:01.748123   16147 cri.go:89] found id: "9dd08b1c15310e3e02aa83c8c9360860fc2c01140355df46644f94326bfa6a96"
	I1026 00:56:01.748155   16147 cri.go:89] found id: ""
	I1026 00:56:01.748163   16147 logs.go:284] 1 containers: [9dd08b1c15310e3e02aa83c8c9360860fc2c01140355df46644f94326bfa6a96]
	I1026 00:56:01.748217   16147 ssh_runner.go:195] Run: which crictl
	I1026 00:56:01.792462   16147 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 00:56:01.792532   16147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 00:56:01.811786   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:56:01.829183   16147 cri.go:89] found id: "ec3c9ea2041870efb48a938e4af2e89dd331147c012fa336b9d224dc6a6828b8"
	I1026 00:56:01.829207   16147 cri.go:89] found id: ""
	I1026 00:56:01.829218   16147 logs.go:284] 1 containers: [ec3c9ea2041870efb48a938e4af2e89dd331147c012fa336b9d224dc6a6828b8]
	I1026 00:56:01.829271   16147 ssh_runner.go:195] Run: which crictl
	I1026 00:56:01.833261   16147 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 00:56:01.833327   16147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 00:56:01.904436   16147 cri.go:89] found id: "943e6b682f7fd08720ef8644e4e1786ce3ebc489950b927699e9a294a1634fd4"
	I1026 00:56:01.904461   16147 cri.go:89] found id: ""
	I1026 00:56:01.904470   16147 logs.go:284] 1 containers: [943e6b682f7fd08720ef8644e4e1786ce3ebc489950b927699e9a294a1634fd4]
	I1026 00:56:01.904525   16147 ssh_runner.go:195] Run: which crictl
	I1026 00:56:01.907731   16147 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 00:56:01.907796   16147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 00:56:01.992517   16147 cri.go:89] found id: "7b07cef64940d835fcfa592d994904e5ac74d1f697e0c012fbee3686ca594dc7"
	I1026 00:56:01.992541   16147 cri.go:89] found id: ""
	I1026 00:56:01.992551   16147 logs.go:284] 1 containers: [7b07cef64940d835fcfa592d994904e5ac74d1f697e0c012fbee3686ca594dc7]
	I1026 00:56:01.992605   16147 ssh_runner.go:195] Run: which crictl
	I1026 00:56:01.996423   16147 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 00:56:01.996477   16147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	E1026 00:56:02.038275   16147 logs.go:281] Failed to list containers for "kube-controller-manager": crictl list: sudo crictl ps -a --quiet --name=kube-controller-manager: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-10-26T00:56:02Z" level=fatal msg="unable to determine image API version: rpc error: code = Unknown desc = lstat /var/lib/containers/storage/overlay-images/.tmp-images.json582843453: no such file or directory"
	I1026 00:56:02.038304   16147 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 00:56:02.038359   16147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 00:56:02.122940   16147 cri.go:89] found id: "77f5574833c79f17c76feaf56b853a4342b58c235340ef89f64bba26a7d6d870"
	I1026 00:56:02.122964   16147 cri.go:89] found id: ""
	I1026 00:56:02.122974   16147 logs.go:284] 1 containers: [77f5574833c79f17c76feaf56b853a4342b58c235340ef89f64bba26a7d6d870]
	I1026 00:56:02.123027   16147 ssh_runner.go:195] Run: which crictl
	I1026 00:56:02.126560   16147 logs.go:123] Gathering logs for describe nodes ...
	I1026 00:56:02.126593   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1026 00:56:02.140334   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:56:02.314271   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:56:02.411486   16147 logs.go:123] Gathering logs for etcd [9dd08b1c15310e3e02aa83c8c9360860fc2c01140355df46644f94326bfa6a96] ...
	I1026 00:56:02.411521   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9dd08b1c15310e3e02aa83c8c9360860fc2c01140355df46644f94326bfa6a96"
	I1026 00:56:02.500977   16147 logs.go:123] Gathering logs for kube-scheduler [943e6b682f7fd08720ef8644e4e1786ce3ebc489950b927699e9a294a1634fd4] ...
	I1026 00:56:02.501011   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 943e6b682f7fd08720ef8644e4e1786ce3ebc489950b927699e9a294a1634fd4"
	I1026 00:56:02.542289   16147 logs.go:123] Gathering logs for kube-proxy [7b07cef64940d835fcfa592d994904e5ac74d1f697e0c012fbee3686ca594dc7] ...
	I1026 00:56:02.542317   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b07cef64940d835fcfa592d994904e5ac74d1f697e0c012fbee3686ca594dc7"
	I1026 00:56:02.575852   16147 logs.go:123] Gathering logs for CRI-O ...
	I1026 00:56:02.575880   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 00:56:02.639285   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:56:02.659701   16147 logs.go:123] Gathering logs for container status ...
	I1026 00:56:02.659745   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 00:56:02.714653   16147 logs.go:123] Gathering logs for kubelet ...
	I1026 00:56:02.714680   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 00:56:02.785958   16147 logs.go:123] Gathering logs for dmesg ...
	I1026 00:56:02.786000   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 00:56:02.798309   16147 logs.go:123] Gathering logs for kube-apiserver [4a7726545672e0e1bce296d535baa0dbb287da750c1fecfc4e980fb47db3b6b3] ...
	I1026 00:56:02.798343   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a7726545672e0e1bce296d535baa0dbb287da750c1fecfc4e980fb47db3b6b3"
	I1026 00:56:02.811965   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:56:02.913553   16147 logs.go:123] Gathering logs for coredns [ec3c9ea2041870efb48a938e4af2e89dd331147c012fa336b9d224dc6a6828b8] ...
	I1026 00:56:02.913624   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec3c9ea2041870efb48a938e4af2e89dd331147c012fa336b9d224dc6a6828b8"
	I1026 00:56:03.019201   16147 logs.go:123] Gathering logs for kindnet [77f5574833c79f17c76feaf56b853a4342b58c235340ef89f64bba26a7d6d870] ...
	I1026 00:56:03.019240   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77f5574833c79f17c76feaf56b853a4342b58c235340ef89f64bba26a7d6d870"
	I1026 00:56:03.193392   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:56:03.311641   16147 kapi.go:107] duration metric: took 1m7.508753908s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1026 00:56:03.316986   16147 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-211632 cluster.
	I1026 00:56:03.319331   16147 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1026 00:56:03.321260   16147 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1026 00:56:03.639621   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:56:04.139892   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:56:04.639659   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:56:05.138490   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:56:05.647164   16147 system_pods.go:59] 19 kube-system pods found
	I1026 00:56:05.647273   16147 system_pods.go:61] "coredns-5dd5756b68-htzfl" [adda9bac-99f3-459c-a0d8-f314baef0ed1] Running
	I1026 00:56:05.647292   16147 system_pods.go:61] "csi-hostpath-attacher-0" [45ea8f81-1da5-4588-bf4a-2dd212359911] Running
	I1026 00:56:05.647326   16147 system_pods.go:61] "csi-hostpath-resizer-0" [cdc8bcef-d920-49e4-9263-b0c88c263c1a] Running
	I1026 00:56:05.647350   16147 system_pods.go:61] "csi-hostpathplugin-n8dsf" [e510ef5d-092d-4719-9579-047d99e0edb6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1026 00:56:05.647367   16147 system_pods.go:61] "etcd-addons-211632" [08d2e2c9-7aba-4242-a61f-a0c94793f8bf] Running
	I1026 00:56:05.647383   16147 system_pods.go:61] "kindnet-x4r64" [59f20b0c-bba3-4aac-92c6-4f77be16eaf6] Running
	I1026 00:56:05.647414   16147 system_pods.go:61] "kube-apiserver-addons-211632" [832867a3-744f-44d5-8c10-af03b46048b9] Running
	I1026 00:56:05.647434   16147 system_pods.go:61] "kube-controller-manager-addons-211632" [d0b067ad-3b88-4b34-beec-e17e01d2956b] Running
	I1026 00:56:05.647450   16147 system_pods.go:61] "kube-ingress-dns-minikube" [aa4700ad-2b9b-40e1-91ea-7472194766c1] Running
	I1026 00:56:05.647468   16147 system_pods.go:61] "kube-proxy-5xv7d" [e5b7e0ed-0535-4795-9c45-22032cba4c2f] Running
	I1026 00:56:05.647499   16147 system_pods.go:61] "kube-scheduler-addons-211632" [80da97c4-dc62-4595-b830-ad23f164c0e2] Running
	I1026 00:56:05.647518   16147 system_pods.go:61] "metrics-server-7c66d45ddc-8pc98" [40138e51-703f-4aa0-b5ec-5392438b711d] Running
	I1026 00:56:05.647533   16147 system_pods.go:61] "nvidia-device-plugin-daemonset-bbnbx" [64d4d05e-0610-4bb6-a7cc-53da0eb05823] Running
	I1026 00:56:05.647547   16147 system_pods.go:61] "registry-proxy-q4wbt" [77c9316b-3c51-4ba8-8001-81a3132d7651] Running
	I1026 00:56:05.647561   16147 system_pods.go:61] "registry-svllb" [6462cf6d-b638-4950-bc58-6d40cfa1a9e9] Running
	I1026 00:56:05.647597   16147 system_pods.go:61] "snapshot-controller-58dbcc7b99-5jf5l" [1a8b4529-2794-410c-b66f-93a91079cc01] Running
	I1026 00:56:05.647612   16147 system_pods.go:61] "snapshot-controller-58dbcc7b99-jz6r4" [49c869b6-da41-4714-a8c2-69ed29cde96a] Running
	I1026 00:56:05.647626   16147 system_pods.go:61] "storage-provisioner" [cf750322-b255-47b0-98e6-02a90c8c805c] Running
	I1026 00:56:05.647640   16147 system_pods.go:61] "tiller-deploy-7b677967b9-gth4w" [d29c4ef2-76c0-4d9a-bf0f-ff117c9b1924] Running
	I1026 00:56:05.647669   16147 system_pods.go:74] duration metric: took 3.985090507s to wait for pod list to return data ...
	I1026 00:56:05.647692   16147 default_sa.go:34] waiting for default service account to be created ...
	I1026 00:56:05.650062   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:56:05.691477   16147 default_sa.go:45] found service account: "default"
	I1026 00:56:05.691561   16147 default_sa.go:55] duration metric: took 43.854191ms for default service account to be created ...
	I1026 00:56:05.691584   16147 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 00:56:05.702232   16147 system_pods.go:86] 19 kube-system pods found
	I1026 00:56:05.702257   16147 system_pods.go:89] "coredns-5dd5756b68-htzfl" [adda9bac-99f3-459c-a0d8-f314baef0ed1] Running
	I1026 00:56:05.702265   16147 system_pods.go:89] "csi-hostpath-attacher-0" [45ea8f81-1da5-4588-bf4a-2dd212359911] Running
	I1026 00:56:05.702271   16147 system_pods.go:89] "csi-hostpath-resizer-0" [cdc8bcef-d920-49e4-9263-b0c88c263c1a] Running
	I1026 00:56:05.702282   16147 system_pods.go:89] "csi-hostpathplugin-n8dsf" [e510ef5d-092d-4719-9579-047d99e0edb6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1026 00:56:05.702290   16147 system_pods.go:89] "etcd-addons-211632" [08d2e2c9-7aba-4242-a61f-a0c94793f8bf] Running
	I1026 00:56:05.702297   16147 system_pods.go:89] "kindnet-x4r64" [59f20b0c-bba3-4aac-92c6-4f77be16eaf6] Running
	I1026 00:56:05.702303   16147 system_pods.go:89] "kube-apiserver-addons-211632" [832867a3-744f-44d5-8c10-af03b46048b9] Running
	I1026 00:56:05.702310   16147 system_pods.go:89] "kube-controller-manager-addons-211632" [d0b067ad-3b88-4b34-beec-e17e01d2956b] Running
	I1026 00:56:05.702317   16147 system_pods.go:89] "kube-ingress-dns-minikube" [aa4700ad-2b9b-40e1-91ea-7472194766c1] Running
	I1026 00:56:05.702323   16147 system_pods.go:89] "kube-proxy-5xv7d" [e5b7e0ed-0535-4795-9c45-22032cba4c2f] Running
	I1026 00:56:05.702335   16147 system_pods.go:89] "kube-scheduler-addons-211632" [80da97c4-dc62-4595-b830-ad23f164c0e2] Running
	I1026 00:56:05.702343   16147 system_pods.go:89] "metrics-server-7c66d45ddc-8pc98" [40138e51-703f-4aa0-b5ec-5392438b711d] Running
	I1026 00:56:05.702351   16147 system_pods.go:89] "nvidia-device-plugin-daemonset-bbnbx" [64d4d05e-0610-4bb6-a7cc-53da0eb05823] Running
	I1026 00:56:05.702357   16147 system_pods.go:89] "registry-proxy-q4wbt" [77c9316b-3c51-4ba8-8001-81a3132d7651] Running
	I1026 00:56:05.702362   16147 system_pods.go:89] "registry-svllb" [6462cf6d-b638-4950-bc58-6d40cfa1a9e9] Running
	I1026 00:56:05.702368   16147 system_pods.go:89] "snapshot-controller-58dbcc7b99-5jf5l" [1a8b4529-2794-410c-b66f-93a91079cc01] Running
	I1026 00:56:05.702374   16147 system_pods.go:89] "snapshot-controller-58dbcc7b99-jz6r4" [49c869b6-da41-4714-a8c2-69ed29cde96a] Running
	I1026 00:56:05.702379   16147 system_pods.go:89] "storage-provisioner" [cf750322-b255-47b0-98e6-02a90c8c805c] Running
	I1026 00:56:05.702385   16147 system_pods.go:89] "tiller-deploy-7b677967b9-gth4w" [d29c4ef2-76c0-4d9a-bf0f-ff117c9b1924] Running
	I1026 00:56:05.702393   16147 system_pods.go:126] duration metric: took 10.79503ms to wait for k8s-apps to be running ...
	I1026 00:56:05.702402   16147 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 00:56:05.702452   16147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 00:56:05.714764   16147 system_svc.go:56] duration metric: took 12.353716ms WaitForService to wait for kubelet.
	I1026 00:56:05.714793   16147 kubeadm.go:581] duration metric: took 1m19.219794144s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1026 00:56:05.714811   16147 node_conditions.go:102] verifying NodePressure condition ...
	I1026 00:56:05.717160   16147 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1026 00:56:05.717185   16147 node_conditions.go:123] node cpu capacity is 8
	I1026 00:56:05.717197   16147 node_conditions.go:105] duration metric: took 2.381657ms to run NodePressure ...
	I1026 00:56:05.717208   16147 start.go:228] waiting for startup goroutines ...
	I1026 00:56:06.138651   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:56:06.640521   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:56:07.139568   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:56:07.639262   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:56:08.138769   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:56:08.638739   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:56:09.139303   16147 kapi.go:107] duration metric: took 1m15.513926908s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1026 00:56:09.141514   16147 out.go:177] * Enabled addons: nvidia-device-plugin, ingress-dns, storage-provisioner, helm-tiller, inspektor-gadget, metrics-server, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1026 00:56:09.143385   16147 addons.go:502] enable addons completed in 1m22.743503762s: enabled=[nvidia-device-plugin ingress-dns storage-provisioner helm-tiller inspektor-gadget metrics-server storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1026 00:56:09.143427   16147 start.go:233] waiting for cluster config update ...
	I1026 00:56:09.143442   16147 start.go:242] writing updated cluster config ...
	I1026 00:56:09.143703   16147 ssh_runner.go:195] Run: rm -f paused
	I1026 00:56:09.192110   16147 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1026 00:56:09.194268   16147 out.go:177] * Done! kubectl is now configured to use "addons-211632" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Oct 26 00:58:52 addons-211632 crio[949]: time="2023-10-26 00:58:52.345330157Z" level=info msg="Pulled image: gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6" id=f99af02a-ce8a-4431-988a-df56b6d3dcff name=/runtime.v1.ImageService/PullImage
	Oct 26 00:58:52 addons-211632 crio[949]: time="2023-10-26 00:58:52.346240653Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=ee59201b-5718-4bce-b3bf-dcb8eaf523c7 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 00:58:52 addons-211632 crio[949]: time="2023-10-26 00:58:52.347158179Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:97e050c3e21e9472ce8eb8fcb7bb8f23063c0b473fe44bdc42246bb01c15cdd4,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=ee59201b-5718-4bce-b3bf-dcb8eaf523c7 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 00:58:52 addons-211632 crio[949]: time="2023-10-26 00:58:52.348003639Z" level=info msg="Creating container: default/hello-world-app-5d77478584-n946z/hello-world-app" id=fe155ad4-3e3e-4f55-b822-367209b702ff name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 00:58:52 addons-211632 crio[949]: time="2023-10-26 00:58:52.348097989Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 26 00:58:52 addons-211632 crio[949]: time="2023-10-26 00:58:52.421518586Z" level=info msg="Removing container: ebd24d2a6abd5fc6593e66e22ac631cada7e80898c212748f82ec569dfbc8685" id=539a3d35-8316-46b3-b0b5-58fc4d51b370 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 00:58:52 addons-211632 crio[949]: time="2023-10-26 00:58:52.424183310Z" level=info msg="Created container 0531caecb98e2a5ce7dfdfb717f3b5011ca30061dbfcdfd7afc9909edab20f54: default/hello-world-app-5d77478584-n946z/hello-world-app" id=fe155ad4-3e3e-4f55-b822-367209b702ff name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 00:58:52 addons-211632 crio[949]: time="2023-10-26 00:58:52.424656370Z" level=info msg="Starting container: 0531caecb98e2a5ce7dfdfb717f3b5011ca30061dbfcdfd7afc9909edab20f54" id=4ebc5e7c-2778-4be6-a149-4d60182a1c88 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 00:58:52 addons-211632 crio[949]: time="2023-10-26 00:58:52.436211741Z" level=info msg="Started container" PID=12585 containerID=0531caecb98e2a5ce7dfdfb717f3b5011ca30061dbfcdfd7afc9909edab20f54 description=default/hello-world-app-5d77478584-n946z/hello-world-app id=4ebc5e7c-2778-4be6-a149-4d60182a1c88 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3397c37301425974acd92d2d8ac2ae661757d1c06b0b207917db9b99402abcba
	Oct 26 00:58:52 addons-211632 crio[949]: time="2023-10-26 00:58:52.441292413Z" level=info msg="Removed container ebd24d2a6abd5fc6593e66e22ac631cada7e80898c212748f82ec569dfbc8685: kube-system/kube-ingress-dns-minikube/minikube-ingress-dns" id=539a3d35-8316-46b3-b0b5-58fc4d51b370 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 00:58:54 addons-211632 crio[949]: time="2023-10-26 00:58:54.022714715Z" level=info msg="Stopping container: c6bc9dc3e219a4af2abc2d744e3ebccdeb8a17f46540ae61abb96ed8844d3d92 (timeout: 2s)" id=7aa7bebf-212c-4911-8df2-c4ea52a154f2 name=/runtime.v1.RuntimeService/StopContainer
	Oct 26 00:58:56 addons-211632 crio[949]: time="2023-10-26 00:58:56.032382976Z" level=warning msg="Stopping container c6bc9dc3e219a4af2abc2d744e3ebccdeb8a17f46540ae61abb96ed8844d3d92 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=7aa7bebf-212c-4911-8df2-c4ea52a154f2 name=/runtime.v1.RuntimeService/StopContainer
	Oct 26 00:58:56 addons-211632 conmon[6257]: conmon c6bc9dc3e219a4af2abc <ninfo>: container 6269 exited with status 137
	Oct 26 00:58:56 addons-211632 crio[949]: time="2023-10-26 00:58:56.179262813Z" level=info msg="Stopped container c6bc9dc3e219a4af2abc2d744e3ebccdeb8a17f46540ae61abb96ed8844d3d92: ingress-nginx/ingress-nginx-controller-6f48fc54bd-cmvhx/controller" id=7aa7bebf-212c-4911-8df2-c4ea52a154f2 name=/runtime.v1.RuntimeService/StopContainer
	Oct 26 00:58:56 addons-211632 crio[949]: time="2023-10-26 00:58:56.179768253Z" level=info msg="Stopping pod sandbox: d9bc4d9e7295f38d631ff98ae60cb4b9a055f4e6c390615fd1bcc3d21c86c3fc" id=91ed1411-e295-409c-8b73-aaa2fcf95bf4 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 26 00:58:56 addons-211632 crio[949]: time="2023-10-26 00:58:56.182734166Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-5QA2YLLIJGVJUBWC - [0:0]\n:KUBE-HP-EC3UZVDT7GYCHS2O - [0:0]\n-X KUBE-HP-EC3UZVDT7GYCHS2O\n-X KUBE-HP-5QA2YLLIJGVJUBWC\nCOMMIT\n"
	Oct 26 00:58:56 addons-211632 crio[949]: time="2023-10-26 00:58:56.184038179Z" level=info msg="Closing host port tcp:80"
	Oct 26 00:58:56 addons-211632 crio[949]: time="2023-10-26 00:58:56.184076740Z" level=info msg="Closing host port tcp:443"
	Oct 26 00:58:56 addons-211632 crio[949]: time="2023-10-26 00:58:56.185293609Z" level=info msg="Host port tcp:80 does not have an open socket"
	Oct 26 00:58:56 addons-211632 crio[949]: time="2023-10-26 00:58:56.185317051Z" level=info msg="Host port tcp:443 does not have an open socket"
	Oct 26 00:58:56 addons-211632 crio[949]: time="2023-10-26 00:58:56.185461029Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-6f48fc54bd-cmvhx Namespace:ingress-nginx ID:d9bc4d9e7295f38d631ff98ae60cb4b9a055f4e6c390615fd1bcc3d21c86c3fc UID:adde4baa-3237-45cc-b962-7a85220e5af7 NetNS:/var/run/netns/ce6af035-5e2f-42fa-9595-bd9a63b490ba Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 26 00:58:56 addons-211632 crio[949]: time="2023-10-26 00:58:56.185582149Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-6f48fc54bd-cmvhx from CNI network \"kindnet\" (type=ptp)"
	Oct 26 00:58:56 addons-211632 crio[949]: time="2023-10-26 00:58:56.215180076Z" level=info msg="Stopped pod sandbox: d9bc4d9e7295f38d631ff98ae60cb4b9a055f4e6c390615fd1bcc3d21c86c3fc" id=91ed1411-e295-409c-8b73-aaa2fcf95bf4 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 26 00:58:56 addons-211632 crio[949]: time="2023-10-26 00:58:56.434007585Z" level=info msg="Removing container: c6bc9dc3e219a4af2abc2d744e3ebccdeb8a17f46540ae61abb96ed8844d3d92" id=7bb9ab51-f09b-4ee2-9cbb-1d8959578636 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 00:58:56 addons-211632 crio[949]: time="2023-10-26 00:58:56.448718491Z" level=info msg="Removed container c6bc9dc3e219a4af2abc2d744e3ebccdeb8a17f46540ae61abb96ed8844d3d92: ingress-nginx/ingress-nginx-controller-6f48fc54bd-cmvhx/controller" id=7bb9ab51-f09b-4ee2-9cbb-1d8959578636 name=/runtime.v1.RuntimeService/RemoveContainer
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0531caecb98e2       gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6                      8 seconds ago       Running             hello-world-app           0                   3397c37301425       hello-world-app-5d77478584-n946z
	4e5a2f7035c59       docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d                              2 minutes ago       Running             nginx                     0                   cddc7abfc7bdd       nginx
	ff89691910760       ghcr.io/headlamp-k8s/headlamp@sha256:0fff6ba0a2a449e3948274f09640fd1f917b038a1100e6fe78ce401be75584c4                        2 minutes ago       Running             headlamp                  0                   53c88b32f3e8f       headlamp-94b766c-l89p5
	aa2cbceabb7ce       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 2 minutes ago       Running             gcp-auth                  0                   36a1c0cdb3e72       gcp-auth-d4c87556c-kdp8b
	744d736896d35       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   3 minutes ago       Exited              patch                     0                   0d22131f3fa7b       ingress-nginx-admission-patch-b2ksj
	d578e99c3ac89       gcr.io/cloud-spanner-emulator/emulator@sha256:07e8839e7fa1851ac9113295bc6534ead5c151f68bf7d47bd7e00af0c5948931               3 minutes ago       Running             cloud-spanner-emulator    0                   9b0dbfa67621e       cloud-spanner-emulator-56665cdfc-qtjfd
	0652da13c33d3       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   3 minutes ago       Exited              create                    0                   41a359a517a91       ingress-nginx-admission-create-j8kjk
	ec3c9ea204187       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             3 minutes ago       Running             coredns                   0                   6c07f9edd294e       coredns-5dd5756b68-htzfl
	f8417ef628e7e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             3 minutes ago       Running             storage-provisioner       0                   adb4d2166d394       storage-provisioner
	dd7ffca93132d       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:7b77d890d8e78c9e17981524c724331cc3547eab77adf32f4222c98167c7fd21            4 minutes ago       Running             gadget                    0                   de47483b8f1d6       gadget-mlbw5
	7b07cef64940d       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf                                                             4 minutes ago       Running             kube-proxy                0                   de6ea2d92f16d       kube-proxy-5xv7d
	77f5574833c79       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                                             4 minutes ago       Running             kindnet-cni               0                   bad9c2696b3f0       kindnet-x4r64
	943e6b682f7fd       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4                                                             4 minutes ago       Running             kube-scheduler            0                   1ca61355151c1       kube-scheduler-addons-211632
	ce5348c34feb4       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3                                                             4 minutes ago       Running             kube-controller-manager   0                   ed38d93cd2c4c       kube-controller-manager-addons-211632
	9dd08b1c15310       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             4 minutes ago       Running             etcd                      0                   79a44e2124ad2       etcd-addons-211632
	4a7726545672e       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076                                                             4 minutes ago       Running             kube-apiserver            0                   d1f16b8217f9d       kube-apiserver-addons-211632
	
	* 
	* ==> coredns [ec3c9ea2041870efb48a938e4af2e89dd331147c012fa336b9d224dc6a6828b8] <==
	* [INFO] 10.244.0.13:42205 - 63673 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000085703s
	[INFO] 10.244.0.13:54583 - 22076 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.004629801s
	[INFO] 10.244.0.13:54583 - 32314 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.006493897s
	[INFO] 10.244.0.13:50787 - 2257 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004906927s
	[INFO] 10.244.0.13:50787 - 44758 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.007034555s
	[INFO] 10.244.0.13:43236 - 19161 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.005233139s
	[INFO] 10.244.0.13:43236 - 14052 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.006922037s
	[INFO] 10.244.0.13:37775 - 51588 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000113565s
	[INFO] 10.244.0.13:37775 - 42904 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000137415s
	[INFO] 10.244.0.20:58541 - 1726 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000156687s
	[INFO] 10.244.0.20:38933 - 50255 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000158372s
	[INFO] 10.244.0.20:37661 - 19246 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000096492s
	[INFO] 10.244.0.20:48536 - 52729 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000055415s
	[INFO] 10.244.0.20:58069 - 35118 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000144905s
	[INFO] 10.244.0.20:39768 - 47833 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00019479s
	[INFO] 10.244.0.20:46518 - 59515 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.008099971s
	[INFO] 10.244.0.20:56334 - 31948 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.008617315s
	[INFO] 10.244.0.20:48489 - 49030 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.007890011s
	[INFO] 10.244.0.20:33050 - 19858 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.008717875s
	[INFO] 10.244.0.20:54689 - 65002 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00574047s
	[INFO] 10.244.0.20:37119 - 49159 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006584873s
	[INFO] 10.244.0.20:37407 - 28665 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000780807s
	[INFO] 10.244.0.20:41293 - 34935 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001044211s
	[INFO] 10.244.0.25:60003 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000188839s
	[INFO] 10.244.0.25:58065 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000206742s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-211632
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-211632
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=af1d352f1030f8f3ea7f97e311e7fe82ef319942
	                    minikube.k8s.io/name=addons-211632
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_26T00_54_34_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-211632
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 26 Oct 2023 00:54:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-211632
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 26 Oct 2023 00:58:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 26 Oct 2023 00:57:06 +0000   Thu, 26 Oct 2023 00:54:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 26 Oct 2023 00:57:06 +0000   Thu, 26 Oct 2023 00:54:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 26 Oct 2023 00:57:06 +0000   Thu, 26 Oct 2023 00:54:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 26 Oct 2023 00:57:06 +0000   Thu, 26 Oct 2023 00:55:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-211632
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	System Info:
	  Machine ID:                 e8e920dba5e44f52a984b4c201bc4d03
	  System UUID:                526ff319-72ea-4404-bea3-b50b59b7015d
	  Boot ID:                    37a42525-bdda-4c41-ac15-6bc286a851a0
	  Kernel Version:             5.15.0-1045-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                      ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-56665cdfc-qtjfd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m11s
	  default                     hello-world-app-5d77478584-n946z          0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  gadget                      gadget-mlbw5                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	  gcp-auth                    gcp-auth-d4c87556c-kdp8b                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	  headlamp                    headlamp-94b766c-l89p5                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m46s
	  kube-system                 coredns-5dd5756b68-htzfl                  100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m15s
	  kube-system                 etcd-addons-211632                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m28s
	  kube-system                 kindnet-x4r64                             100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m15s
	  kube-system                 kube-apiserver-addons-211632              250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 kube-controller-manager-addons-211632     200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 kube-proxy-5xv7d                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m15s
	  kube-system                 kube-scheduler-addons-211632              100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 storage-provisioner                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m10s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m34s (x8 over 4m34s)  kubelet          Node addons-211632 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m34s (x8 over 4m34s)  kubelet          Node addons-211632 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m34s (x8 over 4m34s)  kubelet          Node addons-211632 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m28s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m28s                  kubelet          Node addons-211632 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m28s                  kubelet          Node addons-211632 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m28s                  kubelet          Node addons-211632 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m16s                  node-controller  Node addons-211632 event: Registered Node addons-211632 in Controller
	  Normal  NodeReady                3m41s                  kubelet          Node addons-211632 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.015363] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.007799] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.001950] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.001726] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.001494] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.010141] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.002130] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.001430] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.001775] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.001280] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +4.560947] kauditd_printk_skb: 32 callbacks suppressed
	[Oct26 00:56] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 22 6f 38 7d 34 7e a6 37 b2 87 81 f5 08 00
	[  +1.011661] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 22 6f 38 7d 34 7e a6 37 b2 87 81 f5 08 00
	[  +2.015810] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 22 6f 38 7d 34 7e a6 37 b2 87 81 f5 08 00
	[  +4.063605] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 22 6f 38 7d 34 7e a6 37 b2 87 81 f5 08 00
	[  +8.191223] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 22 6f 38 7d 34 7e a6 37 b2 87 81 f5 08 00
	[Oct26 00:57] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 22 6f 38 7d 34 7e a6 37 b2 87 81 f5 08 00
	[ +33.532759] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 22 6f 38 7d 34 7e a6 37 b2 87 81 f5 08 00
	
	* 
	* ==> etcd [9dd08b1c15310e3e02aa83c8c9360860fc2c01140355df46644f94326bfa6a96] <==
	* {"level":"info","ts":"2023-10-26T00:54:49.294569Z","caller":"traceutil/trace.go:171","msg":"trace[891182157] transaction","detail":"{read_only:false; response_revision:419; number_of_response:1; }","duration":"191.240282ms","start":"2023-10-26T00:54:49.10331Z","end":"2023-10-26T00:54:49.29455Z","steps":["trace[891182157] 'process raft request'  (duration: 86.613534ms)","trace[891182157] 'compare'  (duration: 102.642788ms)"],"step_count":2}
	{"level":"info","ts":"2023-10-26T00:54:49.613293Z","caller":"traceutil/trace.go:171","msg":"trace[1254569783] transaction","detail":"{read_only:false; response_revision:421; number_of_response:1; }","duration":"207.15739ms","start":"2023-10-26T00:54:49.406114Z","end":"2023-10-26T00:54:49.613271Z","steps":["trace[1254569783] 'process raft request'  (duration: 186.449248ms)","trace[1254569783] 'compare'  (duration: 20.281799ms)"],"step_count":2}
	{"level":"warn","ts":"2023-10-26T00:54:50.294044Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"193.383136ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128024712627227585 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" mod_revision:421 > success:<request_put:<key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" value_size:3174 >> failure:<request_range:<key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-10-26T00:54:50.302429Z","caller":"traceutil/trace.go:171","msg":"trace[1076541632] transaction","detail":"{read_only:false; response_revision:426; number_of_response:1; }","duration":"295.158107ms","start":"2023-10-26T00:54:50.007243Z","end":"2023-10-26T00:54:50.302401Z","steps":["trace[1076541632] 'process raft request'  (duration: 93.334204ms)","trace[1076541632] 'compare'  (duration: 193.05003ms)"],"step_count":2}
	{"level":"info","ts":"2023-10-26T00:54:50.308648Z","caller":"traceutil/trace.go:171","msg":"trace[176108853] transaction","detail":"{read_only:false; response_revision:428; number_of_response:1; }","duration":"200.295573ms","start":"2023-10-26T00:54:50.1083Z","end":"2023-10-26T00:54:50.308596Z","steps":["trace[176108853] 'process raft request'  (duration: 186.216498ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-26T00:54:50.30882Z","caller":"traceutil/trace.go:171","msg":"trace[1239667156] transaction","detail":"{read_only:false; response_revision:427; number_of_response:1; }","duration":"300.138595ms","start":"2023-10-26T00:54:50.008658Z","end":"2023-10-26T00:54:50.308797Z","steps":["trace[1239667156] 'process raft request'  (duration: 285.800305ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-26T00:54:50.39364Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-26T00:54:50.008641Z","time spent":"381.419084ms","remote":"127.0.0.1:45066","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":605,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/kindnet\" mod_revision:284 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/kindnet\" value_size:552 >> failure:<request_range:<key:\"/registry/serviceaccounts/kube-system/kindnet\" > >"}
	{"level":"warn","ts":"2023-10-26T00:54:51.410206Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.508205ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-211632\" ","response":"range_response_count:1 size:5654"}
	{"level":"info","ts":"2023-10-26T00:54:51.410278Z","caller":"traceutil/trace.go:171","msg":"trace[1190423287] range","detail":"{range_begin:/registry/minions/addons-211632; range_end:; response_count:1; response_revision:496; }","duration":"102.5983ms","start":"2023-10-26T00:54:51.307666Z","end":"2023-10-26T00:54:51.410265Z","steps":["trace[1190423287] 'range keys from in-memory index tree'  (duration: 102.427492ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-26T00:54:51.410669Z","caller":"traceutil/trace.go:171","msg":"trace[862519116] transaction","detail":"{read_only:false; response_revision:497; number_of_response:1; }","duration":"102.864159ms","start":"2023-10-26T00:54:51.307795Z","end":"2023-10-26T00:54:51.410659Z","steps":["trace[862519116] 'compare'  (duration: 97.661345ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-26T00:54:51.493793Z","caller":"traceutil/trace.go:171","msg":"trace[803885448] transaction","detail":"{read_only:false; response_revision:499; number_of_response:1; }","duration":"103.550887ms","start":"2023-10-26T00:54:51.390225Z","end":"2023-10-26T00:54:51.493775Z","steps":["trace[803885448] 'process raft request'  (duration: 103.044817ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-26T00:54:51.494043Z","caller":"traceutil/trace.go:171","msg":"trace[410789711] transaction","detail":"{read_only:false; response_revision:500; number_of_response:1; }","duration":"103.671161ms","start":"2023-10-26T00:54:51.390354Z","end":"2023-10-26T00:54:51.494025Z","steps":["trace[410789711] 'process raft request'  (duration: 102.996423ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-26T00:54:51.494663Z","caller":"traceutil/trace.go:171","msg":"trace[2067660786] transaction","detail":"{read_only:false; response_revision:501; number_of_response:1; }","duration":"104.186071ms","start":"2023-10-26T00:54:51.390463Z","end":"2023-10-26T00:54:51.494649Z","steps":["trace[2067660786] 'process raft request'  (duration: 102.928346ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-26T00:54:51.49467Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.574299ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/metrics-server\" ","response":"range_response_count:1 size:775"}
	{"level":"info","ts":"2023-10-26T00:54:51.494996Z","caller":"traceutil/trace.go:171","msg":"trace[1255900101] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/metrics-server; range_end:; response_count:1; response_revision:505; }","duration":"104.910171ms","start":"2023-10-26T00:54:51.390074Z","end":"2023-10-26T00:54:51.494984Z","steps":["trace[1255900101] 'agreement among raft nodes before linearized reading'  (duration: 104.526989ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-26T00:56:01.048109Z","caller":"traceutil/trace.go:171","msg":"trace[725678898] transaction","detail":"{read_only:false; response_revision:1103; number_of_response:1; }","duration":"133.406216ms","start":"2023-10-26T00:56:00.914687Z","end":"2023-10-26T00:56:01.048093Z","steps":["trace[725678898] 'process raft request'  (duration: 133.286711ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-26T00:56:01.205833Z","caller":"traceutil/trace.go:171","msg":"trace[1924785532] transaction","detail":"{read_only:false; response_revision:1106; number_of_response:1; }","duration":"153.4687ms","start":"2023-10-26T00:56:01.052341Z","end":"2023-10-26T00:56:01.205809Z","steps":["trace[1924785532] 'process raft request'  (duration: 90.352993ms)","trace[1924785532] 'compare'  (duration: 62.89638ms)"],"step_count":2}
	{"level":"info","ts":"2023-10-26T00:56:01.205844Z","caller":"traceutil/trace.go:171","msg":"trace[1607229179] transaction","detail":"{read_only:false; response_revision:1107; number_of_response:1; }","duration":"153.434448ms","start":"2023-10-26T00:56:01.052399Z","end":"2023-10-26T00:56:01.205833Z","steps":["trace[1607229179] 'process raft request'  (duration: 153.326812ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-26T00:56:40.241218Z","caller":"traceutil/trace.go:171","msg":"trace[961112176] transaction","detail":"{read_only:false; response_revision:1464; number_of_response:1; }","duration":"118.659382ms","start":"2023-10-26T00:56:40.122516Z","end":"2023-10-26T00:56:40.241175Z","steps":["trace[961112176] 'process raft request'  (duration: 118.515959ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-26T00:56:56.111971Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.611832ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128024712627230950 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/hpvc-restore.179182504bc2c958\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/hpvc-restore.179182504bc2c958\" value_size:649 lease:8128024712627230604 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2023-10-26T00:56:56.112125Z","caller":"traceutil/trace.go:171","msg":"trace[515936108] linearizableReadLoop","detail":"{readStateIndex:1581; appliedIndex:1579; }","duration":"232.415181ms","start":"2023-10-26T00:56:55.879698Z","end":"2023-10-26T00:56:56.112113Z","steps":["trace[515936108] 'read index received'  (duration: 107.622355ms)","trace[515936108] 'applied index is now lower than readState.Index'  (duration: 124.792097ms)"],"step_count":2}
	{"level":"warn","ts":"2023-10-26T00:56:56.112221Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"232.527975ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshotcontents/snapcontent-6ba4a181-024f-4357-8b35-f585cdf3a20d\" ","response":"range_response_count:1 size:1976"}
	{"level":"info","ts":"2023-10-26T00:56:56.112338Z","caller":"traceutil/trace.go:171","msg":"trace[2084944682] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshotcontents/snapcontent-6ba4a181-024f-4357-8b35-f585cdf3a20d; range_end:; response_count:1; response_revision:1522; }","duration":"232.636389ms","start":"2023-10-26T00:56:55.879676Z","end":"2023-10-26T00:56:56.112313Z","steps":["trace[2084944682] 'agreement among raft nodes before linearized reading'  (duration: 232.478838ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-26T00:56:56.112209Z","caller":"traceutil/trace.go:171","msg":"trace[71609010] transaction","detail":"{read_only:false; response_revision:1522; number_of_response:1; }","duration":"233.39448ms","start":"2023-10-26T00:56:55.878795Z","end":"2023-10-26T00:56:56.11219Z","steps":["trace[71609010] 'process raft request'  (duration: 233.257497ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-26T00:56:56.112209Z","caller":"traceutil/trace.go:171","msg":"trace[1731625499] transaction","detail":"{read_only:false; response_revision:1521; number_of_response:1; }","duration":"234.644875ms","start":"2023-10-26T00:56:55.877547Z","end":"2023-10-26T00:56:56.112192Z","steps":["trace[1731625499] 'process raft request'  (duration: 109.759753ms)","trace[1731625499] 'compare'  (duration: 124.501174ms)"],"step_count":2}
	
	* 
	* ==> gcp-auth [aa2cbceabb7ceee001867afe8f27fa7f0add28f9d24c7f21114c0ecadf512cb8] <==
	* 2023/10/26 00:56:02 GCP Auth Webhook started!
	2023/10/26 00:56:09 Ready to marshal response ...
	2023/10/26 00:56:09 Ready to write response ...
	2023/10/26 00:56:09 Ready to marshal response ...
	2023/10/26 00:56:09 Ready to write response ...
	2023/10/26 00:56:14 Ready to marshal response ...
	2023/10/26 00:56:14 Ready to write response ...
	2023/10/26 00:56:15 Ready to marshal response ...
	2023/10/26 00:56:15 Ready to write response ...
	2023/10/26 00:56:15 Ready to marshal response ...
	2023/10/26 00:56:15 Ready to write response ...
	2023/10/26 00:56:15 Ready to marshal response ...
	2023/10/26 00:56:15 Ready to write response ...
	2023/10/26 00:56:19 Ready to marshal response ...
	2023/10/26 00:56:19 Ready to write response ...
	2023/10/26 00:56:20 Ready to marshal response ...
	2023/10/26 00:56:20 Ready to write response ...
	2023/10/26 00:56:29 Ready to marshal response ...
	2023/10/26 00:56:29 Ready to write response ...
	2023/10/26 00:56:33 Ready to marshal response ...
	2023/10/26 00:56:33 Ready to write response ...
	2023/10/26 00:56:56 Ready to marshal response ...
	2023/10/26 00:56:56 Ready to write response ...
	2023/10/26 00:58:50 Ready to marshal response ...
	2023/10/26 00:58:50 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  00:59:01 up 41 min,  0 users,  load average: 0.46, 0.69, 0.36
	Linux addons-211632 5.15.0-1045-gcp #53~20.04.2-Ubuntu SMP Wed Oct 18 12:59:20 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [77f5574833c79f17c76feaf56b853a4342b58c235340ef89f64bba26a7d6d870] <==
	* I1026 00:57:00.633796       1 main.go:227] handling current node
	I1026 00:57:10.637013       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1026 00:57:10.637036       1 main.go:227] handling current node
	I1026 00:57:20.644227       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1026 00:57:20.644252       1 main.go:227] handling current node
	I1026 00:57:30.647943       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1026 00:57:30.647966       1 main.go:227] handling current node
	I1026 00:57:40.660366       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1026 00:57:40.660393       1 main.go:227] handling current node
	I1026 00:57:50.669228       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1026 00:57:50.669251       1 main.go:227] handling current node
	I1026 00:58:00.680572       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1026 00:58:00.680594       1 main.go:227] handling current node
	I1026 00:58:10.690347       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1026 00:58:10.690380       1 main.go:227] handling current node
	I1026 00:58:20.693892       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1026 00:58:20.693918       1 main.go:227] handling current node
	I1026 00:58:30.704784       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1026 00:58:30.704808       1 main.go:227] handling current node
	I1026 00:58:40.716824       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1026 00:58:40.716846       1 main.go:227] handling current node
	I1026 00:58:50.728929       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1026 00:58:50.728957       1 main.go:227] handling current node
	I1026 00:59:00.740984       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1026 00:59:00.741007       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [4a7726545672e0e1bce296d535baa0dbb287da750c1fecfc4e980fb47db3b6b3] <==
	* I1026 00:56:29.209282       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I1026 00:56:29.510021       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.106.0.125"}
	I1026 00:56:29.682598       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	E1026 00:56:37.120931       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1026 00:56:45.834509       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1026 00:57:12.870676       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1026 00:57:12.870722       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1026 00:57:12.877171       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1026 00:57:12.877317       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1026 00:57:12.886234       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1026 00:57:12.886413       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1026 00:57:12.891885       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1026 00:57:12.891926       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1026 00:57:12.899565       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1026 00:57:12.899690       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1026 00:57:12.901930       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1026 00:57:12.902212       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1026 00:57:12.908341       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1026 00:57:12.908382       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1026 00:57:12.914013       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1026 00:57:12.914053       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1026 00:57:13.892620       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1026 00:57:13.914672       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1026 00:57:13.917550       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1026 00:58:50.987631       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.106.237.3"}
	
	* 
	* ==> kube-controller-manager [ce5348c34feb4c043c46d6ec80097165c6c347bb62987af2221ed402e15afec5] <==
	* W1026 00:57:33.541885       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1026 00:57:33.541922       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1026 00:57:44.999573       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1026 00:57:44.999606       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1026 00:57:45.697777       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1026 00:57:45.697807       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1026 00:57:47.992601       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1026 00:57:47.992631       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1026 00:58:18.618658       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1026 00:58:18.618697       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1026 00:58:34.695354       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1026 00:58:34.695384       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1026 00:58:35.036170       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1026 00:58:35.036198       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1026 00:58:50.832844       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I1026 00:58:50.842129       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-n946z"
	I1026 00:58:50.849877       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="17.248914ms"
	I1026 00:58:50.854892       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="4.898948ms"
	I1026 00:58:50.855001       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="49.921µs"
	I1026 00:58:50.861758       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="106.053µs"
	I1026 00:58:53.010852       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I1026 00:58:53.012351       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-6f48fc54bd" duration="7.27µs"
	I1026 00:58:53.016118       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I1026 00:58:53.440712       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="4.979442ms"
	I1026 00:58:53.440796       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="47.874µs"
	
	* 
	* ==> kube-proxy [7b07cef64940d835fcfa592d994904e5ac74d1f697e0c012fbee3686ca594dc7] <==
	* I1026 00:54:48.512806       1 server_others.go:69] "Using iptables proxy"
	I1026 00:54:48.712479       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1026 00:54:50.810582       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 00:54:50.895332       1 server_others.go:152] "Using iptables Proxier"
	I1026 00:54:50.895455       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1026 00:54:50.895513       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1026 00:54:50.895581       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1026 00:54:50.895829       1 server.go:846] "Version info" version="v1.28.3"
	I1026 00:54:50.896070       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 00:54:50.896917       1 config.go:188] "Starting service config controller"
	I1026 00:54:50.897013       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1026 00:54:50.897081       1 config.go:97] "Starting endpoint slice config controller"
	I1026 00:54:50.897126       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1026 00:54:50.897729       1 config.go:315] "Starting node config controller"
	I1026 00:54:50.897798       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1026 00:54:50.997389       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1026 00:54:50.997452       1 shared_informer.go:318] Caches are synced for service config
	I1026 00:54:50.997859       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [943e6b682f7fd08720ef8644e4e1786ce3ebc489950b927699e9a294a1634fd4] <==
	* W1026 00:54:30.509691       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1026 00:54:30.510202       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1026 00:54:30.509764       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1026 00:54:30.510217       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1026 00:54:30.509822       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1026 00:54:30.510232       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1026 00:54:30.509887       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1026 00:54:30.510248       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1026 00:54:30.510328       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1026 00:54:30.510341       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1026 00:54:30.510478       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1026 00:54:30.510520       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1026 00:54:31.324635       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1026 00:54:31.324669       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1026 00:54:31.368981       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1026 00:54:31.369013       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1026 00:54:31.476556       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1026 00:54:31.476599       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1026 00:54:31.507261       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1026 00:54:31.507302       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1026 00:54:31.514477       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1026 00:54:31.514510       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1026 00:54:31.526718       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1026 00:54:31.526749       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I1026 00:54:31.902593       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Oct 26 00:58:50 addons-211632 kubelet[1560]: I1026 00:58:50.915793    1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h6vr\" (UniqueName: \"kubernetes.io/projected/21e511d4-48f7-4357-8f66-9f4e49192399-kube-api-access-8h6vr\") pod \"hello-world-app-5d77478584-n946z\" (UID: \"21e511d4-48f7-4357-8f66-9f4e49192399\") " pod="default/hello-world-app-5d77478584-n946z"
	Oct 26 00:58:50 addons-211632 kubelet[1560]: I1026 00:58:50.915863    1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/21e511d4-48f7-4357-8f66-9f4e49192399-gcp-creds\") pod \"hello-world-app-5d77478584-n946z\" (UID: \"21e511d4-48f7-4357-8f66-9f4e49192399\") " pod="default/hello-world-app-5d77478584-n946z"
	Oct 26 00:58:51 addons-211632 kubelet[1560]: W1026 00:58:51.254464    1560 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/c35c1efbfb1308521a0fad3c55b09428dda85e5e3a6610a52bcdc6463385c9e1/crio-3397c37301425974acd92d2d8ac2ae661757d1c06b0b207917db9b99402abcba WatchSource:0}: Error finding container 3397c37301425974acd92d2d8ac2ae661757d1c06b0b207917db9b99402abcba: Status 404 returned error can't find the container with id 3397c37301425974acd92d2d8ac2ae661757d1c06b0b207917db9b99402abcba
	Oct 26 00:58:52 addons-211632 kubelet[1560]: I1026 00:58:52.224469    1560 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ch2sr\" (UniqueName: \"kubernetes.io/projected/aa4700ad-2b9b-40e1-91ea-7472194766c1-kube-api-access-ch2sr\") pod \"aa4700ad-2b9b-40e1-91ea-7472194766c1\" (UID: \"aa4700ad-2b9b-40e1-91ea-7472194766c1\") "
	Oct 26 00:58:52 addons-211632 kubelet[1560]: I1026 00:58:52.226235    1560 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa4700ad-2b9b-40e1-91ea-7472194766c1-kube-api-access-ch2sr" (OuterVolumeSpecName: "kube-api-access-ch2sr") pod "aa4700ad-2b9b-40e1-91ea-7472194766c1" (UID: "aa4700ad-2b9b-40e1-91ea-7472194766c1"). InnerVolumeSpecName "kube-api-access-ch2sr". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 26 00:58:52 addons-211632 kubelet[1560]: I1026 00:58:52.325817    1560 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-ch2sr\" (UniqueName: \"kubernetes.io/projected/aa4700ad-2b9b-40e1-91ea-7472194766c1-kube-api-access-ch2sr\") on node \"addons-211632\" DevicePath \"\""
	Oct 26 00:58:52 addons-211632 kubelet[1560]: I1026 00:58:52.420518    1560 scope.go:117] "RemoveContainer" containerID="ebd24d2a6abd5fc6593e66e22ac631cada7e80898c212748f82ec569dfbc8685"
	Oct 26 00:58:52 addons-211632 kubelet[1560]: I1026 00:58:52.441587    1560 scope.go:117] "RemoveContainer" containerID="ebd24d2a6abd5fc6593e66e22ac631cada7e80898c212748f82ec569dfbc8685"
	Oct 26 00:58:52 addons-211632 kubelet[1560]: E1026 00:58:52.442032    1560 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebd24d2a6abd5fc6593e66e22ac631cada7e80898c212748f82ec569dfbc8685\": container with ID starting with ebd24d2a6abd5fc6593e66e22ac631cada7e80898c212748f82ec569dfbc8685 not found: ID does not exist" containerID="ebd24d2a6abd5fc6593e66e22ac631cada7e80898c212748f82ec569dfbc8685"
	Oct 26 00:58:52 addons-211632 kubelet[1560]: I1026 00:58:52.442086    1560 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebd24d2a6abd5fc6593e66e22ac631cada7e80898c212748f82ec569dfbc8685"} err="failed to get container status \"ebd24d2a6abd5fc6593e66e22ac631cada7e80898c212748f82ec569dfbc8685\": rpc error: code = NotFound desc = could not find container \"ebd24d2a6abd5fc6593e66e22ac631cada7e80898c212748f82ec569dfbc8685\": container with ID starting with ebd24d2a6abd5fc6593e66e22ac631cada7e80898c212748f82ec569dfbc8685 not found: ID does not exist"
	Oct 26 00:58:53 addons-211632 kubelet[1560]: I1026 00:58:53.309059    1560 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="0a1992f9-8ade-4693-b88a-bf433f2fff55" path="/var/lib/kubelet/pods/0a1992f9-8ade-4693-b88a-bf433f2fff55/volumes"
	Oct 26 00:58:53 addons-211632 kubelet[1560]: I1026 00:58:53.309504    1560 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="aa4700ad-2b9b-40e1-91ea-7472194766c1" path="/var/lib/kubelet/pods/aa4700ad-2b9b-40e1-91ea-7472194766c1/volumes"
	Oct 26 00:58:53 addons-211632 kubelet[1560]: I1026 00:58:53.309884    1560 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b37e8f21-e41b-4bc0-be30-ac5be9979b5b" path="/var/lib/kubelet/pods/b37e8f21-e41b-4bc0-be30-ac5be9979b5b/volumes"
	Oct 26 00:58:53 addons-211632 kubelet[1560]: I1026 00:58:53.435983    1560 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/hello-world-app-5d77478584-n946z" podStartSLOduration=2.382084527 podCreationTimestamp="2023-10-26 00:58:50 +0000 UTC" firstStartedPulling="2023-10-26 00:58:51.291775049 +0000 UTC m=+258.067690527" lastFinishedPulling="2023-10-26 00:58:52.345634586 +0000 UTC m=+259.121550065" observedRunningTime="2023-10-26 00:58:53.435553991 +0000 UTC m=+260.211469477" watchObservedRunningTime="2023-10-26 00:58:53.435944065 +0000 UTC m=+260.211859552"
	Oct 26 00:58:56 addons-211632 kubelet[1560]: I1026 00:58:56.314934    1560 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s26cv\" (UniqueName: \"kubernetes.io/projected/adde4baa-3237-45cc-b962-7a85220e5af7-kube-api-access-s26cv\") pod \"adde4baa-3237-45cc-b962-7a85220e5af7\" (UID: \"adde4baa-3237-45cc-b962-7a85220e5af7\") "
	Oct 26 00:58:56 addons-211632 kubelet[1560]: I1026 00:58:56.314998    1560 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/adde4baa-3237-45cc-b962-7a85220e5af7-webhook-cert\") pod \"adde4baa-3237-45cc-b962-7a85220e5af7\" (UID: \"adde4baa-3237-45cc-b962-7a85220e5af7\") "
	Oct 26 00:58:56 addons-211632 kubelet[1560]: I1026 00:58:56.316982    1560 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adde4baa-3237-45cc-b962-7a85220e5af7-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "adde4baa-3237-45cc-b962-7a85220e5af7" (UID: "adde4baa-3237-45cc-b962-7a85220e5af7"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 26 00:58:56 addons-211632 kubelet[1560]: I1026 00:58:56.316994    1560 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adde4baa-3237-45cc-b962-7a85220e5af7-kube-api-access-s26cv" (OuterVolumeSpecName: "kube-api-access-s26cv") pod "adde4baa-3237-45cc-b962-7a85220e5af7" (UID: "adde4baa-3237-45cc-b962-7a85220e5af7"). InnerVolumeSpecName "kube-api-access-s26cv". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 26 00:58:56 addons-211632 kubelet[1560]: I1026 00:58:56.416020    1560 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/adde4baa-3237-45cc-b962-7a85220e5af7-webhook-cert\") on node \"addons-211632\" DevicePath \"\""
	Oct 26 00:58:56 addons-211632 kubelet[1560]: I1026 00:58:56.416059    1560 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-s26cv\" (UniqueName: \"kubernetes.io/projected/adde4baa-3237-45cc-b962-7a85220e5af7-kube-api-access-s26cv\") on node \"addons-211632\" DevicePath \"\""
	Oct 26 00:58:56 addons-211632 kubelet[1560]: I1026 00:58:56.432970    1560 scope.go:117] "RemoveContainer" containerID="c6bc9dc3e219a4af2abc2d744e3ebccdeb8a17f46540ae61abb96ed8844d3d92"
	Oct 26 00:58:56 addons-211632 kubelet[1560]: I1026 00:58:56.448961    1560 scope.go:117] "RemoveContainer" containerID="c6bc9dc3e219a4af2abc2d744e3ebccdeb8a17f46540ae61abb96ed8844d3d92"
	Oct 26 00:58:56 addons-211632 kubelet[1560]: E1026 00:58:56.449329    1560 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6bc9dc3e219a4af2abc2d744e3ebccdeb8a17f46540ae61abb96ed8844d3d92\": container with ID starting with c6bc9dc3e219a4af2abc2d744e3ebccdeb8a17f46540ae61abb96ed8844d3d92 not found: ID does not exist" containerID="c6bc9dc3e219a4af2abc2d744e3ebccdeb8a17f46540ae61abb96ed8844d3d92"
	Oct 26 00:58:56 addons-211632 kubelet[1560]: I1026 00:58:56.449375    1560 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6bc9dc3e219a4af2abc2d744e3ebccdeb8a17f46540ae61abb96ed8844d3d92"} err="failed to get container status \"c6bc9dc3e219a4af2abc2d744e3ebccdeb8a17f46540ae61abb96ed8844d3d92\": rpc error: code = NotFound desc = could not find container \"c6bc9dc3e219a4af2abc2d744e3ebccdeb8a17f46540ae61abb96ed8844d3d92\": container with ID starting with c6bc9dc3e219a4af2abc2d744e3ebccdeb8a17f46540ae61abb96ed8844d3d92 not found: ID does not exist"
	Oct 26 00:58:57 addons-211632 kubelet[1560]: I1026 00:58:57.309474    1560 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="adde4baa-3237-45cc-b962-7a85220e5af7" path="/var/lib/kubelet/pods/adde4baa-3237-45cc-b962-7a85220e5af7/volumes"
	
	* 
	* ==> storage-provisioner [f8417ef628e7e60f63892414fa42ad6a118481875325bbe75dcf8165c6387f45] <==
	* I1026 00:55:21.715098       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1026 00:55:21.722579       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 00:55:21.722723       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1026 00:55:21.732375       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 00:55:21.732437       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0c4e8849-6c49-4d55-8889-30991b4ff466", APIVersion:"v1", ResourceVersion:"891", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-211632_10c8f30d-333e-4a05-ac23-355f0e36840c became leader
	I1026 00:55:21.732602       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-211632_10c8f30d-333e-4a05-ac23-355f0e36840c!
	I1026 00:55:21.833886       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-211632_10c8f30d-333e-4a05-ac23-355f0e36840c!
	
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-211632 -n addons-211632
helpers_test.go:261: (dbg) Run:  kubectl --context addons-211632 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (153.48s)

TestAddons/parallel/InspektorGadget (8.83s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-mlbw5" [dbfa30e5-7b11-4259-96cd-ee7e37b4e485] Running
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.010443415s
addons_test.go:840: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-211632
addons_test.go:840: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-211632: exit status 11 (254.183647ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-10-26T00:56:31Z" level=error msg="stat /run/runc/ca2d34327e3029ca56f3475279a47a4682826986be5d1dfb3ce769696e739ed6: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:841: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-211632" : exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/InspektorGadget]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-211632
helpers_test.go:235: (dbg) docker inspect addons-211632:

-- stdout --
	[
	    {
	        "Id": "c35c1efbfb1308521a0fad3c55b09428dda85e5e3a6610a52bcdc6463385c9e1",
	        "Created": "2023-10-26T00:54:16.451511291Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 16821,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-26T00:54:16.745965318Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3e615aae66792e89a7d2c001b5c02b5e78a999706d53f7c8dbfcff1520487fdd",
	        "ResolvConfPath": "/var/lib/docker/containers/c35c1efbfb1308521a0fad3c55b09428dda85e5e3a6610a52bcdc6463385c9e1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c35c1efbfb1308521a0fad3c55b09428dda85e5e3a6610a52bcdc6463385c9e1/hostname",
	        "HostsPath": "/var/lib/docker/containers/c35c1efbfb1308521a0fad3c55b09428dda85e5e3a6610a52bcdc6463385c9e1/hosts",
	        "LogPath": "/var/lib/docker/containers/c35c1efbfb1308521a0fad3c55b09428dda85e5e3a6610a52bcdc6463385c9e1/c35c1efbfb1308521a0fad3c55b09428dda85e5e3a6610a52bcdc6463385c9e1-json.log",
	        "Name": "/addons-211632",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-211632:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-211632",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d5f8f2ff13ef010d1eeb0dcf7693776bdfa0d2948114563624701a77b5421ecd-init/diff:/var/lib/docker/overlay2/007d7e88bd091d08c1a177e3000477192ad6785f5c636023d34df0777872a721/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d5f8f2ff13ef010d1eeb0dcf7693776bdfa0d2948114563624701a77b5421ecd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d5f8f2ff13ef010d1eeb0dcf7693776bdfa0d2948114563624701a77b5421ecd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d5f8f2ff13ef010d1eeb0dcf7693776bdfa0d2948114563624701a77b5421ecd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-211632",
	                "Source": "/var/lib/docker/volumes/addons-211632/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-211632",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-211632",
	                "name.minikube.sigs.k8s.io": "addons-211632",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5d474e9f4630ed1951b26df644f78270f76beb39a9e3abbc81b1744a46066432",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/5d474e9f4630",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-211632": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "c35c1efbfb13",
	                        "addons-211632"
	                    ],
	                    "NetworkID": "b957c5cf203521d5b26819ec1325095eba54611228466abbd505078bd4f5873a",
	                    "EndpointID": "59f2864850b537f7432eb1a950e1ba4fbdd9aa7a46b5eb4d2666aa3dc4dce0a2",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-211632 -n addons-211632
helpers_test.go:244: <<< TestAddons/parallel/InspektorGadget FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/InspektorGadget]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-211632 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-211632 logs -n 25: (2.609957846s)
helpers_test.go:252: TestAddons/parallel/InspektorGadget logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-179503   | jenkins | v1.31.2 | 26 Oct 23 00:53 UTC |                     |
	|         | -p download-only-179503                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| start   | -o=json --download-only                                                                     | download-only-179503   | jenkins | v1.31.2 | 26 Oct 23 00:53 UTC |                     |
	|         | -p download-only-179503                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.31.2 | 26 Oct 23 00:53 UTC | 26 Oct 23 00:53 UTC |
	| delete  | -p download-only-179503                                                                     | download-only-179503   | jenkins | v1.31.2 | 26 Oct 23 00:53 UTC | 26 Oct 23 00:53 UTC |
	| delete  | -p download-only-179503                                                                     | download-only-179503   | jenkins | v1.31.2 | 26 Oct 23 00:53 UTC | 26 Oct 23 00:53 UTC |
	| start   | --download-only -p                                                                          | download-docker-912806 | jenkins | v1.31.2 | 26 Oct 23 00:53 UTC |                     |
	|         | download-docker-912806                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-912806                                                                   | download-docker-912806 | jenkins | v1.31.2 | 26 Oct 23 00:53 UTC | 26 Oct 23 00:53 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-014731   | jenkins | v1.31.2 | 26 Oct 23 00:53 UTC |                     |
	|         | binary-mirror-014731                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:40063                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-014731                                                                     | binary-mirror-014731   | jenkins | v1.31.2 | 26 Oct 23 00:53 UTC | 26 Oct 23 00:53 UTC |
	| addons  | disable dashboard -p                                                                        | addons-211632          | jenkins | v1.31.2 | 26 Oct 23 00:53 UTC |                     |
	|         | addons-211632                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-211632          | jenkins | v1.31.2 | 26 Oct 23 00:53 UTC |                     |
	|         | addons-211632                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-211632 --wait=true                                                                | addons-211632          | jenkins | v1.31.2 | 26 Oct 23 00:53 UTC | 26 Oct 23 00:56 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-211632          | jenkins | v1.31.2 | 26 Oct 23 00:56 UTC | 26 Oct 23 00:56 UTC |
	|         | -p addons-211632                                                                            |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-211632          | jenkins | v1.31.2 | 26 Oct 23 00:56 UTC | 26 Oct 23 00:56 UTC |
	|         | -p addons-211632                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-211632 addons disable                                                                | addons-211632          | jenkins | v1.31.2 | 26 Oct 23 00:56 UTC | 26 Oct 23 00:56 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ssh     | addons-211632 ssh cat                                                                       | addons-211632          | jenkins | v1.31.2 | 26 Oct 23 00:56 UTC | 26 Oct 23 00:56 UTC |
	|         | /opt/local-path-provisioner/pvc-12cb842a-8d18-426c-8f30-ad9da7858417_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-211632 addons disable                                                                | addons-211632          | jenkins | v1.31.2 | 26 Oct 23 00:56 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-211632          | jenkins | v1.31.2 | 26 Oct 23 00:56 UTC | 26 Oct 23 00:56 UTC |
	|         | addons-211632                                                                               |                        |         |         |                     |                     |
	| ip      | addons-211632 ip                                                                            | addons-211632          | jenkins | v1.31.2 | 26 Oct 23 00:56 UTC | 26 Oct 23 00:56 UTC |
	| addons  | addons-211632 addons disable                                                                | addons-211632          | jenkins | v1.31.2 | 26 Oct 23 00:56 UTC | 26 Oct 23 00:56 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-211632 addons                                                                        | addons-211632          | jenkins | v1.31.2 | 26 Oct 23 00:56 UTC | 26 Oct 23 00:56 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-211632          | jenkins | v1.31.2 | 26 Oct 23 00:56 UTC |                     |
	|         | addons-211632                                                                               |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/26 00:53:52
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 00:53:52.159138   16147 out.go:296] Setting OutFile to fd 1 ...
	I1026 00:53:52.159281   16147 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1026 00:53:52.159291   16147 out.go:309] Setting ErrFile to fd 2...
	I1026 00:53:52.159295   16147 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1026 00:53:52.159469   16147 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17491-8444/.minikube/bin
	I1026 00:53:52.160081   16147 out.go:303] Setting JSON to false
	I1026 00:53:52.160904   16147 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2178,"bootTime":1698279454,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 00:53:52.160970   16147 start.go:138] virtualization: kvm guest
	I1026 00:53:52.163527   16147 out.go:177] * [addons-211632] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1026 00:53:52.165365   16147 out.go:177]   - MINIKUBE_LOCATION=17491
	I1026 00:53:52.165328   16147 notify.go:220] Checking for updates...
	I1026 00:53:52.168739   16147 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 00:53:52.170321   16147 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17491-8444/kubeconfig
	I1026 00:53:52.172150   16147 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17491-8444/.minikube
	I1026 00:53:52.174016   16147 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 00:53:52.175520   16147 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 00:53:52.177260   16147 driver.go:378] Setting default libvirt URI to qemu:///system
	I1026 00:53:52.199075   16147 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1026 00:53:52.199155   16147 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 00:53:52.251056   16147 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2023-10-26 00:53:52.242458271 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 00:53:52.251170   16147 docker.go:295] overlay module found
	I1026 00:53:52.253489   16147 out.go:177] * Using the docker driver based on user configuration
	I1026 00:53:52.255231   16147 start.go:298] selected driver: docker
	I1026 00:53:52.255250   16147 start.go:902] validating driver "docker" against <nil>
	I1026 00:53:52.255262   16147 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 00:53:52.256070   16147 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 00:53:52.306224   16147 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2023-10-26 00:53:52.297800414 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 00:53:52.306445   16147 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1026 00:53:52.306654   16147 start_flags.go:934] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 00:53:52.308690   16147 out.go:177] * Using Docker driver with root privileges
	I1026 00:53:52.310657   16147 cni.go:84] Creating CNI manager for ""
	I1026 00:53:52.310685   16147 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 00:53:52.310702   16147 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1026 00:53:52.310718   16147 start_flags.go:323] config:
	{Name:addons-211632 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-211632 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1026 00:53:52.312619   16147 out.go:177] * Starting control plane node addons-211632 in cluster addons-211632
	I1026 00:53:52.314378   16147 cache.go:121] Beginning downloading kic base image for docker with crio
	I1026 00:53:52.316065   16147 out.go:177] * Pulling base image ...
	I1026 00:53:52.317726   16147 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1026 00:53:52.317779   16147 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17491-8444/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1026 00:53:52.317793   16147 cache.go:56] Caching tarball of preloaded images
	I1026 00:53:52.317901   16147 preload.go:174] Found /home/jenkins/minikube-integration/17491-8444/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 00:53:52.317915   16147 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1026 00:53:52.317895   16147 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon
	I1026 00:53:52.318311   16147 profile.go:148] Saving config to /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/config.json ...
	I1026 00:53:52.318338   16147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/config.json: {Name:mk9ebe6d7e171a85ebe7053e9ea40c2a25508f10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 00:53:52.334000   16147 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 to local cache
	I1026 00:53:52.334114   16147 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local cache directory
	I1026 00:53:52.334134   16147 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local cache directory, skipping pull
	I1026 00:53:52.334140   16147 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 exists in cache, skipping pull
	I1026 00:53:52.334153   16147 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 as a tarball
	I1026 00:53:52.334164   16147 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 from local cache
	I1026 00:54:03.408299   16147 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 from cached tarball
	I1026 00:54:03.408324   16147 cache.go:194] Successfully downloaded all kic artifacts
	I1026 00:54:03.408352   16147 start.go:365] acquiring machines lock for addons-211632: {Name:mkffd89f32a0bb9cab225acc87f1ded3e2ae28fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 00:54:03.408445   16147 start.go:369] acquired machines lock for "addons-211632" in 71.984µs
	I1026 00:54:03.408467   16147 start.go:93] Provisioning new machine with config: &{Name:addons-211632 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-211632 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 00:54:03.408564   16147 start.go:125] createHost starting for "" (driver="docker")
	I1026 00:54:03.410739   16147 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1026 00:54:03.410988   16147 start.go:159] libmachine.API.Create for "addons-211632" (driver="docker")
	I1026 00:54:03.411020   16147 client.go:168] LocalClient.Create starting
	I1026 00:54:03.411106   16147 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17491-8444/.minikube/certs/ca.pem
	I1026 00:54:03.566148   16147 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17491-8444/.minikube/certs/cert.pem
	I1026 00:54:03.908085   16147 cli_runner.go:164] Run: docker network inspect addons-211632 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1026 00:54:03.923333   16147 cli_runner.go:211] docker network inspect addons-211632 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1026 00:54:03.923392   16147 network_create.go:281] running [docker network inspect addons-211632] to gather additional debugging logs...
	I1026 00:54:03.923408   16147 cli_runner.go:164] Run: docker network inspect addons-211632
	W1026 00:54:03.937798   16147 cli_runner.go:211] docker network inspect addons-211632 returned with exit code 1
	I1026 00:54:03.937828   16147 network_create.go:284] error running [docker network inspect addons-211632]: docker network inspect addons-211632: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-211632 not found
	I1026 00:54:03.937840   16147 network_create.go:286] output of [docker network inspect addons-211632]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-211632 not found
	
	** /stderr **
	I1026 00:54:03.937931   16147 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 00:54:03.953203   16147 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0027f9da0}
	I1026 00:54:03.953231   16147 network_create.go:124] attempt to create docker network addons-211632 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1026 00:54:03.953265   16147 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-211632 addons-211632
	I1026 00:54:04.001100   16147 network_create.go:108] docker network addons-211632 192.168.49.0/24 created
	I1026 00:54:04.001143   16147 kic.go:121] calculated static IP "192.168.49.2" for the "addons-211632" container
	I1026 00:54:04.001195   16147 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1026 00:54:04.015563   16147 cli_runner.go:164] Run: docker volume create addons-211632 --label name.minikube.sigs.k8s.io=addons-211632 --label created_by.minikube.sigs.k8s.io=true
	I1026 00:54:04.031210   16147 oci.go:103] Successfully created a docker volume addons-211632
	I1026 00:54:04.031283   16147 cli_runner.go:164] Run: docker run --rm --name addons-211632-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-211632 --entrypoint /usr/bin/test -v addons-211632:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -d /var/lib
	I1026 00:54:11.257879   16147 cli_runner.go:217] Completed: docker run --rm --name addons-211632-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-211632 --entrypoint /usr/bin/test -v addons-211632:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -d /var/lib: (7.226551351s)
	I1026 00:54:11.257912   16147 oci.go:107] Successfully prepared a docker volume addons-211632
	I1026 00:54:11.257933   16147 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1026 00:54:11.257957   16147 kic.go:194] Starting extracting preloaded images to volume ...
	I1026 00:54:11.258023   16147 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17491-8444/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-211632:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -I lz4 -xf /preloaded.tar -C /extractDir
	I1026 00:54:16.385612   16147 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17491-8444/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-211632:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -I lz4 -xf /preloaded.tar -C /extractDir: (5.127512931s)
	I1026 00:54:16.385646   16147 kic.go:203] duration metric: took 5.127687 seconds to extract preloaded images to volume
	W1026 00:54:16.385813   16147 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1026 00:54:16.385920   16147 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1026 00:54:16.437604   16147 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-211632 --name addons-211632 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-211632 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-211632 --network addons-211632 --ip 192.168.49.2 --volume addons-211632:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883
	I1026 00:54:16.754311   16147 cli_runner.go:164] Run: docker container inspect addons-211632 --format={{.State.Running}}
	I1026 00:54:16.772159   16147 cli_runner.go:164] Run: docker container inspect addons-211632 --format={{.State.Status}}
	I1026 00:54:16.789500   16147 cli_runner.go:164] Run: docker exec addons-211632 stat /var/lib/dpkg/alternatives/iptables
	I1026 00:54:16.855179   16147 oci.go:144] the created container "addons-211632" has a running status.
	I1026 00:54:16.855211   16147 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17491-8444/.minikube/machines/addons-211632/id_rsa...
	I1026 00:54:16.979233   16147 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17491-8444/.minikube/machines/addons-211632/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1026 00:54:16.999914   16147 cli_runner.go:164] Run: docker container inspect addons-211632 --format={{.State.Status}}
	I1026 00:54:17.018862   16147 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1026 00:54:17.018887   16147 kic_runner.go:114] Args: [docker exec --privileged addons-211632 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1026 00:54:17.087888   16147 cli_runner.go:164] Run: docker container inspect addons-211632 --format={{.State.Status}}
	I1026 00:54:17.108723   16147 machine.go:88] provisioning docker machine ...
	I1026 00:54:17.108780   16147 ubuntu.go:169] provisioning hostname "addons-211632"
	I1026 00:54:17.108879   16147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-211632
	I1026 00:54:17.130599   16147 main.go:141] libmachine: Using SSH client type: native
	I1026 00:54:17.131077   16147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1026 00:54:17.131104   16147 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-211632 && echo "addons-211632" | sudo tee /etc/hostname
	I1026 00:54:17.132983   16147 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48492->127.0.0.1:32772: read: connection reset by peer
	I1026 00:54:20.271408   16147 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-211632
	
	I1026 00:54:20.271496   16147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-211632
	I1026 00:54:20.287517   16147 main.go:141] libmachine: Using SSH client type: native
	I1026 00:54:20.287986   16147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1026 00:54:20.288010   16147 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-211632' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-211632/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-211632' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 00:54:20.405628   16147 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 00:54:20.405662   16147 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17491-8444/.minikube CaCertPath:/home/jenkins/minikube-integration/17491-8444/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17491-8444/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17491-8444/.minikube}
	I1026 00:54:20.405705   16147 ubuntu.go:177] setting up certificates
	I1026 00:54:20.405715   16147 provision.go:83] configureAuth start
	I1026 00:54:20.405761   16147 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-211632
	I1026 00:54:20.422562   16147 provision.go:138] copyHostCerts
	I1026 00:54:20.422641   16147 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17491-8444/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17491-8444/.minikube/ca.pem (1078 bytes)
	I1026 00:54:20.422748   16147 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17491-8444/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17491-8444/.minikube/cert.pem (1123 bytes)
	I1026 00:54:20.422806   16147 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17491-8444/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17491-8444/.minikube/key.pem (1675 bytes)
	I1026 00:54:20.422871   16147 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17491-8444/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17491-8444/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17491-8444/.minikube/certs/ca-key.pem org=jenkins.addons-211632 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-211632]
	I1026 00:54:20.630879   16147 provision.go:172] copyRemoteCerts
	I1026 00:54:20.630932   16147 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 00:54:20.630970   16147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-211632
	I1026 00:54:20.648089   16147 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/addons-211632/id_rsa Username:docker}
	I1026 00:54:20.737752   16147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 00:54:20.759358   16147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1026 00:54:20.780982   16147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 00:54:20.801713   16147 provision.go:86] duration metric: configureAuth took 395.974656ms
	I1026 00:54:20.801739   16147 ubuntu.go:193] setting minikube options for container-runtime
	I1026 00:54:20.801936   16147 config.go:182] Loaded profile config "addons-211632": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1026 00:54:20.802042   16147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-211632
	I1026 00:54:20.818716   16147 main.go:141] libmachine: Using SSH client type: native
	I1026 00:54:20.819051   16147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1026 00:54:20.819077   16147 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 00:54:21.024557   16147 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 00:54:21.024583   16147 machine.go:91] provisioned docker machine in 3.915827028s
	I1026 00:54:21.024597   16147 client.go:171] LocalClient.Create took 17.613565234s
	I1026 00:54:21.024613   16147 start.go:167] duration metric: libmachine.API.Create for "addons-211632" took 17.613625593s
	I1026 00:54:21.024639   16147 start.go:300] post-start starting for "addons-211632" (driver="docker")
	I1026 00:54:21.024655   16147 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 00:54:21.024706   16147 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 00:54:21.024748   16147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-211632
	I1026 00:54:21.041716   16147 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/addons-211632/id_rsa Username:docker}
	I1026 00:54:21.130200   16147 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 00:54:21.133166   16147 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 00:54:21.133196   16147 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1026 00:54:21.133205   16147 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1026 00:54:21.133212   16147 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1026 00:54:21.133225   16147 filesync.go:126] Scanning /home/jenkins/minikube-integration/17491-8444/.minikube/addons for local assets ...
	I1026 00:54:21.133305   16147 filesync.go:126] Scanning /home/jenkins/minikube-integration/17491-8444/.minikube/files for local assets ...
	I1026 00:54:21.133338   16147 start.go:303] post-start completed in 108.687187ms
	I1026 00:54:21.133658   16147 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-211632
	I1026 00:54:21.149968   16147 profile.go:148] Saving config to /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/config.json ...
	I1026 00:54:21.150256   16147 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 00:54:21.150310   16147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-211632
	I1026 00:54:21.166359   16147 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/addons-211632/id_rsa Username:docker}
	I1026 00:54:21.254471   16147 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 00:54:21.258527   16147 start.go:128] duration metric: createHost completed in 17.849950332s
	I1026 00:54:21.258554   16147 start.go:83] releasing machines lock for "addons-211632", held for 17.850096768s
	I1026 00:54:21.258619   16147 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-211632
	I1026 00:54:21.274461   16147 ssh_runner.go:195] Run: cat /version.json
	I1026 00:54:21.274505   16147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-211632
	I1026 00:54:21.274532   16147 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 00:54:21.274604   16147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-211632
	I1026 00:54:21.292422   16147 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/addons-211632/id_rsa Username:docker}
	I1026 00:54:21.292851   16147 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/addons-211632/id_rsa Username:docker}
	I1026 00:54:21.469819   16147 ssh_runner.go:195] Run: systemctl --version
	I1026 00:54:21.473748   16147 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 00:54:21.609700   16147 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1026 00:54:21.613697   16147 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 00:54:21.630672   16147 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1026 00:54:21.630744   16147 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 00:54:21.656912   16147 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1026 00:54:21.656930   16147 start.go:472] detecting cgroup driver to use...
	I1026 00:54:21.656959   16147 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1026 00:54:21.657001   16147 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 00:54:21.670361   16147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 00:54:21.680381   16147 docker.go:198] disabling cri-docker service (if available) ...
	I1026 00:54:21.680434   16147 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 00:54:21.692359   16147 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 00:54:21.704851   16147 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 00:54:21.777801   16147 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 00:54:21.861277   16147 docker.go:214] disabling docker service ...
	I1026 00:54:21.861341   16147 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 00:54:21.877820   16147 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 00:54:21.887713   16147 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 00:54:21.964385   16147 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 00:54:22.041917   16147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 00:54:22.051521   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 00:54:22.064776   16147 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1026 00:54:22.064830   16147 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 00:54:22.072920   16147 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 00:54:22.072990   16147 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 00:54:22.081236   16147 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 00:54:22.089375   16147 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 00:54:22.097702   16147 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 00:54:22.105324   16147 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 00:54:22.112591   16147 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 00:54:22.119960   16147 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 00:54:22.192893   16147 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 00:54:22.287207   16147 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 00:54:22.287268   16147 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 00:54:22.290323   16147 start.go:540] Will wait 60s for crictl version
	I1026 00:54:22.290368   16147 ssh_runner.go:195] Run: which crictl
	I1026 00:54:22.293169   16147 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1026 00:54:22.323730   16147 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1026 00:54:22.323822   16147 ssh_runner.go:195] Run: crio --version
	I1026 00:54:22.358001   16147 ssh_runner.go:195] Run: crio --version
	I1026 00:54:22.391795   16147 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.6 ...
	I1026 00:54:22.393102   16147 cli_runner.go:164] Run: docker network inspect addons-211632 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 00:54:22.408233   16147 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1026 00:54:22.411487   16147 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 00:54:22.421221   16147 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1026 00:54:22.421281   16147 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 00:54:22.473416   16147 crio.go:496] all images are preloaded for cri-o runtime.
	I1026 00:54:22.473438   16147 crio.go:415] Images already preloaded, skipping extraction
	I1026 00:54:22.473485   16147 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 00:54:22.505431   16147 crio.go:496] all images are preloaded for cri-o runtime.
	I1026 00:54:22.505452   16147 cache_images.go:84] Images are preloaded, skipping loading
	I1026 00:54:22.505503   16147 ssh_runner.go:195] Run: crio config
	I1026 00:54:22.546519   16147 cni.go:84] Creating CNI manager for ""
	I1026 00:54:22.546538   16147 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 00:54:22.546556   16147 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1026 00:54:22.546574   16147 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-211632 NodeName:addons-211632 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 00:54:22.546730   16147 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-211632"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 00:54:22.546791   16147 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-211632 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:addons-211632 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1026 00:54:22.546850   16147 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1026 00:54:22.554704   16147 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 00:54:22.554772   16147 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 00:54:22.562102   16147 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I1026 00:54:22.577028   16147 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 00:54:22.592204   16147 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I1026 00:54:22.607512   16147 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1026 00:54:22.610502   16147 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 00:54:22.620217   16147 certs.go:56] Setting up /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632 for IP: 192.168.49.2
	I1026 00:54:22.620256   16147 certs.go:190] acquiring lock for shared ca certs: {Name:mk5c45c423cc5a6761a0ccf5b25a0c8b531fe271 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 00:54:22.620389   16147 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17491-8444/.minikube/ca.key
	I1026 00:54:22.679611   16147 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17491-8444/.minikube/ca.crt ...
	I1026 00:54:22.679639   16147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17491-8444/.minikube/ca.crt: {Name:mk2276d3b00ed6731a6512cf41e99b72143bec5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 00:54:22.679822   16147 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17491-8444/.minikube/ca.key ...
	I1026 00:54:22.679837   16147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17491-8444/.minikube/ca.key: {Name:mkffdebe349966b741a3a7f33073ebaa3f212967 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 00:54:22.679930   16147 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17491-8444/.minikube/proxy-client-ca.key
	I1026 00:54:22.854803   16147 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17491-8444/.minikube/proxy-client-ca.crt ...
	I1026 00:54:22.854832   16147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17491-8444/.minikube/proxy-client-ca.crt: {Name:mk6cdb0cf01b90dfd65a171999802d0e49391e61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 00:54:22.855006   16147 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17491-8444/.minikube/proxy-client-ca.key ...
	I1026 00:54:22.855024   16147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17491-8444/.minikube/proxy-client-ca.key: {Name:mkd5f6bb5f1850cfd0aa58b7ac491a1c9abef6c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 00:54:22.855155   16147 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/client.key
	I1026 00:54:22.855176   16147 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/client.crt with IP's: []
	I1026 00:54:22.936275   16147 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/client.crt ...
	I1026 00:54:22.936313   16147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/client.crt: {Name:mkfafa2f462e4f8bcccc960086af046d3433937e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 00:54:22.936506   16147 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/client.key ...
	I1026 00:54:22.936523   16147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/client.key: {Name:mk0b1b74286262dd198bd82f49ce42289d234d1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 00:54:22.936611   16147 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/apiserver.key.dd3b5fb2
	I1026 00:54:22.936633   16147 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1026 00:54:23.136175   16147 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/apiserver.crt.dd3b5fb2 ...
	I1026 00:54:23.136207   16147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/apiserver.crt.dd3b5fb2: {Name:mk6a266751544189db1c6ee27b8593b320cc7c79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 00:54:23.136384   16147 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/apiserver.key.dd3b5fb2 ...
	I1026 00:54:23.136401   16147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/apiserver.key.dd3b5fb2: {Name:mk34b1c3bd7489b3f5fc9661bb8d9662105dca11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 00:54:23.136495   16147 certs.go:337] copying /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/apiserver.crt
	I1026 00:54:23.136594   16147 certs.go:341] copying /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/apiserver.key
	I1026 00:54:23.136663   16147 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/proxy-client.key
	I1026 00:54:23.136686   16147 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/proxy-client.crt with IP's: []
	I1026 00:54:23.327259   16147 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/proxy-client.crt ...
	I1026 00:54:23.327293   16147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/proxy-client.crt: {Name:mk0a5db21acd4ad77bf4b4b7939dae3a538fd59a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 00:54:23.327465   16147 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/proxy-client.key ...
	I1026 00:54:23.327481   16147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/proxy-client.key: {Name:mk94a40f616d1c51d423530177e2f1a80764a1f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 00:54:23.327688   16147 certs.go:437] found cert: /home/jenkins/minikube-integration/17491-8444/.minikube/certs/home/jenkins/minikube-integration/17491-8444/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 00:54:23.327723   16147 certs.go:437] found cert: /home/jenkins/minikube-integration/17491-8444/.minikube/certs/home/jenkins/minikube-integration/17491-8444/.minikube/certs/ca.pem (1078 bytes)
	I1026 00:54:23.327747   16147 certs.go:437] found cert: /home/jenkins/minikube-integration/17491-8444/.minikube/certs/home/jenkins/minikube-integration/17491-8444/.minikube/certs/cert.pem (1123 bytes)
	I1026 00:54:23.327782   16147 certs.go:437] found cert: /home/jenkins/minikube-integration/17491-8444/.minikube/certs/home/jenkins/minikube-integration/17491-8444/.minikube/certs/key.pem (1675 bytes)
	I1026 00:54:23.328456   16147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1026 00:54:23.349801   16147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 00:54:23.370163   16147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 00:54:23.391403   16147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1026 00:54:23.413055   16147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 00:54:23.435385   16147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 00:54:23.456894   16147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 00:54:23.476970   16147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1026 00:54:23.496645   16147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 00:54:23.516687   16147 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 00:54:23.531250   16147 ssh_runner.go:195] Run: openssl version
	I1026 00:54:23.536060   16147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 00:54:23.543931   16147 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 00:54:23.546914   16147 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 26 00:54 /usr/share/ca-certificates/minikubeCA.pem
	I1026 00:54:23.546975   16147 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 00:54:23.552747   16147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 00:54:23.560630   16147 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1026 00:54:23.563477   16147 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1026 00:54:23.563528   16147 kubeadm.go:404] StartCluster: {Name:addons-211632 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-211632 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1026 00:54:23.563612   16147 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 00:54:23.563646   16147 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 00:54:23.595381   16147 cri.go:89] found id: ""
	I1026 00:54:23.595445   16147 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 00:54:23.603239   16147 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 00:54:23.610804   16147 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1026 00:54:23.610870   16147 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 00:54:23.618264   16147 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 00:54:23.618308   16147 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1026 00:54:23.658523   16147 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1026 00:54:23.658766   16147 kubeadm.go:322] [preflight] Running pre-flight checks
	I1026 00:54:23.691518   16147 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1026 00:54:23.691612   16147 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1045-gcp
	I1026 00:54:23.691662   16147 kubeadm.go:322] OS: Linux
	I1026 00:54:23.691733   16147 kubeadm.go:322] CGROUPS_CPU: enabled
	I1026 00:54:23.691790   16147 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1026 00:54:23.691842   16147 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1026 00:54:23.691882   16147 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1026 00:54:23.691922   16147 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1026 00:54:23.691990   16147 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1026 00:54:23.692058   16147 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1026 00:54:23.692123   16147 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1026 00:54:23.692188   16147 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1026 00:54:23.751329   16147 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 00:54:23.751467   16147 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 00:54:23.751548   16147 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1026 00:54:23.947501   16147 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 00:54:23.950872   16147 out.go:204]   - Generating certificates and keys ...
	I1026 00:54:23.951061   16147 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1026 00:54:23.951177   16147 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1026 00:54:24.180825   16147 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 00:54:24.315262   16147 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1026 00:54:24.487487   16147 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1026 00:54:24.750022   16147 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1026 00:54:24.912402   16147 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1026 00:54:24.912578   16147 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-211632 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1026 00:54:25.263176   16147 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1026 00:54:25.263330   16147 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-211632 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1026 00:54:25.584508   16147 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 00:54:25.895008   16147 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 00:54:26.181187   16147 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1026 00:54:26.181321   16147 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 00:54:26.261256   16147 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 00:54:26.599612   16147 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 00:54:26.764284   16147 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 00:54:26.877979   16147 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 00:54:26.878412   16147 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 00:54:26.881295   16147 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 00:54:26.883643   16147 out.go:204]   - Booting up control plane ...
	I1026 00:54:26.883801   16147 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 00:54:26.883922   16147 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 00:54:26.884026   16147 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 00:54:26.891579   16147 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 00:54:26.892321   16147 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 00:54:26.892414   16147 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1026 00:54:26.970752   16147 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1026 00:54:31.972630   16147 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.001966 seconds
	I1026 00:54:31.972736   16147 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 00:54:31.983523   16147 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 00:54:32.503420   16147 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 00:54:32.503636   16147 kubeadm.go:322] [mark-control-plane] Marking the node addons-211632 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 00:54:33.012570   16147 kubeadm.go:322] [bootstrap-token] Using token: iibgbk.7hhnwxs03oqbvbv8
	I1026 00:54:33.014171   16147 out.go:204]   - Configuring RBAC rules ...
	I1026 00:54:33.014333   16147 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 00:54:33.019026   16147 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 00:54:33.025603   16147 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 00:54:33.028597   16147 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 00:54:33.031072   16147 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 00:54:33.033679   16147 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 00:54:33.046028   16147 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 00:54:33.262809   16147 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1026 00:54:33.422896   16147 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1026 00:54:33.423722   16147 kubeadm.go:322] 
	I1026 00:54:33.423843   16147 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1026 00:54:33.423865   16147 kubeadm.go:322] 
	I1026 00:54:33.423981   16147 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1026 00:54:33.423991   16147 kubeadm.go:322] 
	I1026 00:54:33.424036   16147 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1026 00:54:33.424122   16147 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 00:54:33.424200   16147 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 00:54:33.424211   16147 kubeadm.go:322] 
	I1026 00:54:33.424284   16147 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1026 00:54:33.424311   16147 kubeadm.go:322] 
	I1026 00:54:33.424401   16147 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 00:54:33.424423   16147 kubeadm.go:322] 
	I1026 00:54:33.424500   16147 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1026 00:54:33.424626   16147 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 00:54:33.424735   16147 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 00:54:33.424743   16147 kubeadm.go:322] 
	I1026 00:54:33.424842   16147 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 00:54:33.424940   16147 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1026 00:54:33.424954   16147 kubeadm.go:322] 
	I1026 00:54:33.425087   16147 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token iibgbk.7hhnwxs03oqbvbv8 \
	I1026 00:54:33.425226   16147 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:fcb226ee6da23e7f860dc1a15447b5e2bdaebad51636d54784ba9f6eb94cd3aa \
	I1026 00:54:33.425259   16147 kubeadm.go:322] 	--control-plane 
	I1026 00:54:33.425269   16147 kubeadm.go:322] 
	I1026 00:54:33.425376   16147 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1026 00:54:33.425386   16147 kubeadm.go:322] 
	I1026 00:54:33.425494   16147 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token iibgbk.7hhnwxs03oqbvbv8 \
	I1026 00:54:33.425633   16147 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:fcb226ee6da23e7f860dc1a15447b5e2bdaebad51636d54784ba9f6eb94cd3aa 
	I1026 00:54:33.427294   16147 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-gcp\n", err: exit status 1
	I1026 00:54:33.427431   16147 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1026 00:54:33.427454   16147 cni.go:84] Creating CNI manager for ""
	I1026 00:54:33.427461   16147 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 00:54:33.429382   16147 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1026 00:54:33.430925   16147 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1026 00:54:33.434984   16147 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1026 00:54:33.435004   16147 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1026 00:54:33.451183   16147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1026 00:54:34.094507   16147 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 00:54:34.094614   16147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:54:34.094643   16147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=af1d352f1030f8f3ea7f97e311e7fe82ef319942 minikube.k8s.io/name=addons-211632 minikube.k8s.io/updated_at=2023_10_26T00_54_34_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:54:34.198101   16147 ops.go:34] apiserver oom_adj: -16
	I1026 00:54:34.198256   16147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:54:34.260040   16147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:54:34.828312   16147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:54:35.328638   16147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:54:35.828253   16147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:54:36.328493   16147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:54:36.827832   16147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:54:37.328476   16147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:54:37.828428   16147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:54:38.328049   16147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:54:38.827755   16147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:54:39.328192   16147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:54:39.828360   16147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:54:40.327769   16147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:54:40.828643   16147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:54:41.327802   16147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:54:41.828030   16147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:54:42.327733   16147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:54:42.828231   16147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:54:43.328143   16147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:54:43.828747   16147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:54:44.328473   16147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:54:44.828646   16147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:54:45.328484   16147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:54:45.828201   16147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:54:46.328477   16147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:54:46.398759   16147 kubeadm.go:1081] duration metric: took 12.304197478s to wait for elevateKubeSystemPrivileges.
	I1026 00:54:46.398795   16147 kubeadm.go:406] StartCluster complete in 22.835270292s
	I1026 00:54:46.398817   16147 settings.go:142] acquiring lock: {Name:mk3f6a6b512050e15c823ee035bfa16b068e5bc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 00:54:46.398933   16147 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17491-8444/kubeconfig
	I1026 00:54:46.399564   16147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17491-8444/kubeconfig: {Name:mkd7fc4e7a7060baa25a329208944605474cc380 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 00:54:46.399796   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 00:54:46.399876   16147 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
	I1026 00:54:46.400030   16147 addons.go:69] Setting volumesnapshots=true in profile "addons-211632"
	I1026 00:54:46.400039   16147 addons.go:69] Setting ingress-dns=true in profile "addons-211632"
	I1026 00:54:46.400058   16147 addons.go:231] Setting addon volumesnapshots=true in "addons-211632"
	I1026 00:54:46.400056   16147 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-211632"
	I1026 00:54:46.400071   16147 addons.go:69] Setting gcp-auth=true in profile "addons-211632"
	I1026 00:54:46.400091   16147 mustload.go:65] Loading cluster: addons-211632
	I1026 00:54:46.400108   16147 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-211632"
	I1026 00:54:46.400113   16147 host.go:66] Checking if "addons-211632" exists ...
	I1026 00:54:46.400108   16147 addons.go:69] Setting default-storageclass=true in profile "addons-211632"
	I1026 00:54:46.400140   16147 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-211632"
	I1026 00:54:46.400159   16147 addons.go:69] Setting cloud-spanner=true in profile "addons-211632"
	I1026 00:54:46.400657   16147 addons.go:231] Setting addon cloud-spanner=true in "addons-211632"
	I1026 00:54:46.400730   16147 config.go:182] Loaded profile config "addons-211632": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1026 00:54:46.400763   16147 host.go:66] Checking if "addons-211632" exists ...
	I1026 00:54:46.400630   16147 config.go:182] Loaded profile config "addons-211632": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1026 00:54:46.400804   16147 addons.go:69] Setting helm-tiller=true in profile "addons-211632"
	I1026 00:54:46.400826   16147 addons.go:231] Setting addon helm-tiller=true in "addons-211632"
	I1026 00:54:46.400853   16147 host.go:66] Checking if "addons-211632" exists ...
	I1026 00:54:46.401105   16147 cli_runner.go:164] Run: docker container inspect addons-211632 --format={{.State.Status}}
	I1026 00:54:46.401125   16147 cli_runner.go:164] Run: docker container inspect addons-211632 --format={{.State.Status}}
	I1026 00:54:46.401194   16147 cli_runner.go:164] Run: docker container inspect addons-211632 --format={{.State.Status}}
	I1026 00:54:46.401511   16147 addons.go:69] Setting ingress=true in profile "addons-211632"
	I1026 00:54:46.401547   16147 addons.go:231] Setting addon ingress=true in "addons-211632"
	I1026 00:54:46.401627   16147 host.go:66] Checking if "addons-211632" exists ...
	I1026 00:54:46.402205   16147 cli_runner.go:164] Run: docker container inspect addons-211632 --format={{.State.Status}}
	I1026 00:54:46.400062   16147 addons.go:231] Setting addon ingress-dns=true in "addons-211632"
	I1026 00:54:46.403021   16147 addons.go:69] Setting inspektor-gadget=true in profile "addons-211632"
	I1026 00:54:46.403036   16147 addons.go:231] Setting addon inspektor-gadget=true in "addons-211632"
	I1026 00:54:46.403089   16147 host.go:66] Checking if "addons-211632" exists ...
	I1026 00:54:46.403186   16147 addons.go:69] Setting metrics-server=true in profile "addons-211632"
	I1026 00:54:46.403197   16147 addons.go:231] Setting addon metrics-server=true in "addons-211632"
	I1026 00:54:46.403228   16147 host.go:66] Checking if "addons-211632" exists ...
	I1026 00:54:46.403529   16147 host.go:66] Checking if "addons-211632" exists ...
	I1026 00:54:46.404197   16147 cli_runner.go:164] Run: docker container inspect addons-211632 --format={{.State.Status}}
	I1026 00:54:46.404220   16147 cli_runner.go:164] Run: docker container inspect addons-211632 --format={{.State.Status}}
	I1026 00:54:46.404428   16147 cli_runner.go:164] Run: docker container inspect addons-211632 --format={{.State.Status}}
	I1026 00:54:46.404958   16147 cli_runner.go:164] Run: docker container inspect addons-211632 --format={{.State.Status}}
	I1026 00:54:46.400780   16147 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-211632"
	I1026 00:54:46.405261   16147 cli_runner.go:164] Run: docker container inspect addons-211632 --format={{.State.Status}}
	I1026 00:54:46.405280   16147 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-211632"
	I1026 00:54:46.405807   16147 host.go:66] Checking if "addons-211632" exists ...
	I1026 00:54:46.406767   16147 host.go:66] Checking if "addons-211632" exists ...
	I1026 00:54:46.407114   16147 cli_runner.go:164] Run: docker container inspect addons-211632 --format={{.State.Status}}
	I1026 00:54:46.407404   16147 addons.go:69] Setting registry=true in profile "addons-211632"
	I1026 00:54:46.407450   16147 addons.go:231] Setting addon registry=true in "addons-211632"
	I1026 00:54:46.407501   16147 host.go:66] Checking if "addons-211632" exists ...
	I1026 00:54:46.408000   16147 cli_runner.go:164] Run: docker container inspect addons-211632 --format={{.State.Status}}
	I1026 00:54:46.408379   16147 cli_runner.go:164] Run: docker container inspect addons-211632 --format={{.State.Status}}
	I1026 00:54:46.410837   16147 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-211632"
	I1026 00:54:46.410916   16147 addons.go:69] Setting storage-provisioner=true in profile "addons-211632"
	I1026 00:54:46.410953   16147 addons.go:231] Setting addon storage-provisioner=true in "addons-211632"
	I1026 00:54:46.411024   16147 host.go:66] Checking if "addons-211632" exists ...
	I1026 00:54:46.410865   16147 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-211632"
	I1026 00:54:46.427811   16147 cli_runner.go:164] Run: docker container inspect addons-211632 --format={{.State.Status}}
	I1026 00:54:46.428442   16147 cli_runner.go:164] Run: docker container inspect addons-211632 --format={{.State.Status}}
	I1026 00:54:46.439148   16147 host.go:66] Checking if "addons-211632" exists ...
	I1026 00:54:46.453651   16147 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.21.0
	I1026 00:54:46.455473   16147 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1026 00:54:46.455493   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1026 00:54:46.455627   16147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-211632
	I1026 00:54:46.458993   16147 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1026 00:54:46.460341   16147 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1026 00:54:46.460374   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1026 00:54:46.460334   16147 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I1026 00:54:46.461806   16147 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I1026 00:54:46.461828   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I1026 00:54:46.461873   16147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-211632
	I1026 00:54:46.460350   16147 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.1
	I1026 00:54:46.460440   16147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-211632
	I1026 00:54:46.465736   16147 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1026 00:54:46.463767   16147 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1026 00:54:46.464386   16147 addons.go:231] Setting addon default-storageclass=true in "addons-211632"
	I1026 00:54:46.468469   16147 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1026 00:54:46.467227   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1026 00:54:46.467274   16147 host.go:66] Checking if "addons-211632" exists ...
	I1026 00:54:46.467318   16147 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1026 00:54:46.470320   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1026 00:54:46.470392   16147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-211632
	I1026 00:54:46.470589   16147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-211632
	I1026 00:54:46.473190   16147 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.3
	I1026 00:54:46.472343   16147 cli_runner.go:164] Run: docker container inspect addons-211632 --format={{.State.Status}}
	I1026 00:54:46.472581   16147 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-211632"
	I1026 00:54:46.478500   16147 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1026 00:54:46.478541   16147 host.go:66] Checking if "addons-211632" exists ...
	I1026 00:54:46.481542   16147 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1026 00:54:46.480301   16147 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1026 00:54:46.480643   16147 cli_runner.go:164] Run: docker container inspect addons-211632 --format={{.State.Status}}
	I1026 00:54:46.484108   16147 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.11
	I1026 00:54:46.482984   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1026 00:54:46.484188   16147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-211632
	I1026 00:54:46.486533   16147 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1026 00:54:46.488425   16147 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1026 00:54:46.488388   16147 out.go:177]   - Using image docker.io/registry:2.8.3
	I1026 00:54:46.488304   16147 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1026 00:54:46.489986   16147 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1026 00:54:46.490076   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1026 00:54:46.490083   16147 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 00:54:46.491469   16147 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-211632" context rescaled to 1 replicas
	I1026 00:54:46.492458   16147 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1026 00:54:46.493298   16147 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 00:54:46.493352   16147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-211632
	I1026 00:54:46.494817   16147 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1026 00:54:46.495257   16147 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 00:54:46.494867   16147 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1026 00:54:46.498490   16147 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1026 00:54:46.496911   16147 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1026 00:54:46.496971   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 00:54:46.496977   16147 out.go:177] * Verifying Kubernetes components...
	I1026 00:54:46.497181   16147 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/addons-211632/id_rsa Username:docker}
	I1026 00:54:46.500997   16147 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1026 00:54:46.501052   16147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-211632
	I1026 00:54:46.501222   16147 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/addons-211632/id_rsa Username:docker}
	I1026 00:54:46.503033   16147 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1026 00:54:46.507121   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1026 00:54:46.507177   16147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-211632
	I1026 00:54:46.513076   16147 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/addons-211632/id_rsa Username:docker}
	I1026 00:54:46.513386   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1026 00:54:46.513437   16147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 00:54:46.515027   16147 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1026 00:54:46.513440   16147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-211632
	I1026 00:54:46.517744   16147 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/addons-211632/id_rsa Username:docker}
	I1026 00:54:46.519004   16147 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1026 00:54:46.519031   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1026 00:54:46.519105   16147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-211632
	I1026 00:54:46.523152   16147 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/addons-211632/id_rsa Username:docker}
	I1026 00:54:46.529352   16147 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/addons-211632/id_rsa Username:docker}
	I1026 00:54:46.542466   16147 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 00:54:46.542491   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 00:54:46.542547   16147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-211632
	I1026 00:54:46.546287   16147 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1026 00:54:46.548031   16147 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/addons-211632/id_rsa Username:docker}
	I1026 00:54:46.550889   16147 out.go:177]   - Using image docker.io/busybox:stable
	I1026 00:54:46.552539   16147 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1026 00:54:46.552560   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1026 00:54:46.552643   16147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-211632
	I1026 00:54:46.559993   16147 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/addons-211632/id_rsa Username:docker}
	I1026 00:54:46.560217   16147 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/addons-211632/id_rsa Username:docker}
	I1026 00:54:46.561796   16147 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/addons-211632/id_rsa Username:docker}
	I1026 00:54:46.562032   16147 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/addons-211632/id_rsa Username:docker}
	I1026 00:54:46.570340   16147 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/addons-211632/id_rsa Username:docker}
	I1026 00:54:46.574736   16147 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/addons-211632/id_rsa Username:docker}
	W1026 00:54:46.594081   16147 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1026 00:54:46.594115   16147 retry.go:31] will retry after 250.611729ms: ssh: handshake failed: EOF
	I1026 00:54:46.700596   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1026 00:54:46.701445   16147 node_ready.go:35] waiting up to 6m0s for node "addons-211632" to be "Ready" ...
	I1026 00:54:46.822006   16147 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I1026 00:54:46.822033   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I1026 00:54:46.899674   16147 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1026 00:54:46.899698   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1026 00:54:46.902140   16147 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1026 00:54:46.902163   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1026 00:54:46.910883   16147 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1026 00:54:46.910912   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I1026 00:54:46.998400   16147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1026 00:54:47.006998   16147 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1026 00:54:47.007025   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1026 00:54:47.008563   16147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1026 00:54:47.015141   16147 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1026 00:54:47.015168   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1026 00:54:47.092963   16147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1026 00:54:47.094088   16147 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1026 00:54:47.094139   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1026 00:54:47.101966   16147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 00:54:47.104583   16147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1026 00:54:47.105396   16147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1026 00:54:47.110740   16147 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1026 00:54:47.110802   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1026 00:54:47.190329   16147 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1026 00:54:47.190404   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1026 00:54:47.197067   16147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1026 00:54:47.307152   16147 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1026 00:54:47.307180   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1026 00:54:47.310633   16147 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1026 00:54:47.310658   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1026 00:54:47.313347   16147 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1026 00:54:47.313374   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1026 00:54:47.391419   16147 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1026 00:54:47.391453   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1026 00:54:47.398825   16147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 00:54:47.407253   16147 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1026 00:54:47.407353   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1026 00:54:47.701204   16147 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1026 00:54:47.701233   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1026 00:54:47.704878   16147 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1026 00:54:47.704907   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1026 00:54:47.707528   16147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1026 00:54:47.712859   16147 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1026 00:54:47.712882   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1026 00:54:47.991311   16147 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1026 00:54:47.991393   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1026 00:54:48.007259   16147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1026 00:54:48.091233   16147 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1026 00:54:48.091334   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1026 00:54:48.111028   16147 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1026 00:54:48.111118   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1026 00:54:48.311199   16147 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1026 00:54:48.311304   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1026 00:54:48.499749   16147 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1026 00:54:48.499785   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1026 00:54:48.597971   16147 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1026 00:54:48.598010   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1026 00:54:48.612560   16147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1026 00:54:48.695179   16147 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.994537314s)
	I1026 00:54:48.695355   16147 start.go:926] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
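The completed `sed` pipeline above edits the `coredns` ConfigMap in place: it inserts a `log` directive before the `errors` line and a `hosts` block before the `forward . /etc/resolv.conf` line, so `host.minikube.internal` resolves to the host gateway from inside the cluster. Reconstructed from the sed expressions, the resulting Corefile fragment looks roughly like this (surrounding plugins elided):

```
    log
    errors
    ...
    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf
```

The `fallthrough` directive is what lets queries for any other name continue past the `hosts` plugin to the upstream `forward` resolver.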
	I1026 00:54:48.896979   16147 node_ready.go:58] node "addons-211632" has status "Ready":"False"
	I1026 00:54:48.994257   16147 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1026 00:54:48.994281   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I1026 00:54:49.007770   16147 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1026 00:54:49.007795   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1026 00:54:49.305489   16147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1026 00:54:49.697581   16147 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.699138405s)
	I1026 00:54:49.801589   16147 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1026 00:54:49.801666   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1026 00:54:50.190441   16147 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1026 00:54:50.190534   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1026 00:54:50.309908   16147 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1026 00:54:50.309984   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1026 00:54:50.512804   16147 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1026 00:54:50.512914   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1026 00:54:50.802735   16147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1026 00:54:51.413345   16147 node_ready.go:58] node "addons-211632" has status "Ready":"False"
	I1026 00:54:52.812931   16147 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.804324112s)
	I1026 00:54:52.812967   16147 addons.go:467] Verifying addon ingress=true in "addons-211632"
	I1026 00:54:52.813007   16147 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.719955426s)
	I1026 00:54:52.814554   16147 out.go:177] * Verifying ingress addon...
	I1026 00:54:52.813111   16147 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.711052672s)
	I1026 00:54:52.813147   16147 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.70849005s)
	I1026 00:54:52.813221   16147 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (5.70775796s)
	I1026 00:54:52.813282   16147 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.616135176s)
	I1026 00:54:52.813338   16147 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.414472532s)
	I1026 00:54:52.813406   16147 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.105840136s)
	I1026 00:54:52.813439   16147 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.806108693s)
	I1026 00:54:52.813553   16147 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.200964512s)
	I1026 00:54:52.813628   16147 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.508104775s)
	I1026 00:54:52.815922   16147 addons.go:467] Verifying addon metrics-server=true in "addons-211632"
	I1026 00:54:52.815922   16147 addons.go:467] Verifying addon registry=true in "addons-211632"
	W1026 00:54:52.815936   16147 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1026 00:54:52.815957   16147 retry.go:31] will retry after 321.977796ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1026 00:54:52.817494   16147 out.go:177] * Verifying registry addon...
	I1026 00:54:52.816690   16147 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1026 00:54:52.819694   16147 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1026 00:54:52.823744   16147 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1026 00:54:52.823761   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1026 00:54:52.826216   16147 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1026 00:54:52.827894   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:54:52.828304   16147 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1026 00:54:52.828371   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:54:52.893743   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:54:53.138254   16147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1026 00:54:53.245557   16147 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1026 00:54:53.245621   16147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-211632
	I1026 00:54:53.263481   16147 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/addons-211632/id_rsa Username:docker}
	I1026 00:54:53.332891   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:54:53.397983   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:54:53.408719   16147 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1026 00:54:53.490341   16147 addons.go:231] Setting addon gcp-auth=true in "addons-211632"
	I1026 00:54:53.490412   16147 host.go:66] Checking if "addons-211632" exists ...
	I1026 00:54:53.490937   16147 cli_runner.go:164] Run: docker container inspect addons-211632 --format={{.State.Status}}
	I1026 00:54:53.519262   16147 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1026 00:54:53.519314   16147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-211632
	I1026 00:54:53.536264   16147 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/addons-211632/id_rsa Username:docker}
	I1026 00:54:53.621272   16147 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.818467522s)
	I1026 00:54:53.621316   16147 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-211632"
	I1026 00:54:53.623201   16147 out.go:177] * Verifying csi-hostpath-driver addon...
	I1026 00:54:53.625373   16147 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1026 00:54:53.630362   16147 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1026 00:54:53.630384   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:54:53.633623   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:54:53.802069   16147 node_ready.go:58] node "addons-211632" has status "Ready":"False"
	I1026 00:54:53.832718   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:54:53.897962   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:54:54.105960   16147 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1026 00:54:54.107537   16147 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1026 00:54:54.108953   16147 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1026 00:54:54.108970   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1026 00:54:54.124997   16147 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1026 00:54:54.125022   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1026 00:54:54.137590   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:54:54.140812   16147 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1026 00:54:54.140829   16147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1026 00:54:54.156250   16147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1026 00:54:54.332906   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:54:54.398605   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:54:54.696542   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:54:54.894050   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:54:54.898087   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:54:55.194814   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:54:55.393454   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:54:55.398261   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:54:55.694651   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:54:55.796916   16147 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.640624765s)
	I1026 00:54:55.797902   16147 addons.go:467] Verifying addon gcp-auth=true in "addons-211632"
	I1026 00:54:55.800566   16147 out.go:177] * Verifying gcp-auth addon...
	I1026 00:54:55.802886   16147 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1026 00:54:55.805688   16147 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1026 00:54:55.805705   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:54:55.808077   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:54:55.892015   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:54:55.897841   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:54:56.137728   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:54:56.301742   16147 node_ready.go:58] node "addons-211632" has status "Ready":"False"
	I1026 00:54:56.312085   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:54:56.332333   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:54:56.398781   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:54:56.692424   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:54:56.812350   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:54:56.892989   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:54:56.898288   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:54:57.138395   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:54:57.312237   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:54:57.332105   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:54:57.398338   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:54:57.638618   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:54:57.811378   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:54:57.832801   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:54:57.899627   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:54:58.138807   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:54:58.301942   16147 node_ready.go:58] node "addons-211632" has status "Ready":"False"
	I1026 00:54:58.312087   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:54:58.332335   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:54:58.398033   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:54:58.637186   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:54:58.811318   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:54:58.832608   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:54:58.897539   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:54:59.138787   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:54:59.311402   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:54:59.332412   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:54:59.397230   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:54:59.637867   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:54:59.811461   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:54:59.832489   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:54:59.897091   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:00.138379   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:00.311246   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:00.332607   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:00.397252   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:00.637815   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:00.801200   16147 node_ready.go:58] node "addons-211632" has status "Ready":"False"
	I1026 00:55:00.811573   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:00.831570   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:00.897916   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:01.137335   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:01.311351   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:01.332360   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:01.397540   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:01.638102   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:01.811888   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:01.831894   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:01.897659   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:02.137317   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:02.311118   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:02.332167   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:02.397947   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:02.637217   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:02.810730   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:02.831773   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:02.897626   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:03.138633   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:03.301293   16147 node_ready.go:58] node "addons-211632" has status "Ready":"False"
	I1026 00:55:03.311646   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:03.331768   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:03.397578   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:03.640070   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:03.810823   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:03.831902   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:03.897622   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:04.137395   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:04.311421   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:04.332751   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:04.397978   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:04.637544   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:04.811304   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:04.832240   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:04.897017   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:05.137445   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:05.311192   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:05.332178   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:05.398220   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:05.637640   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:05.801406   16147 node_ready.go:58] node "addons-211632" has status "Ready":"False"
	I1026 00:55:05.810930   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:05.831995   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:05.897814   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:06.137379   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:06.311571   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:06.332776   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:06.397712   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:06.637219   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:06.811022   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:06.832048   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:06.897839   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:07.137650   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:07.311959   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:07.332019   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:07.397749   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:07.638883   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:07.811659   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:07.831730   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:07.897414   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:08.138052   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:08.301508   16147 node_ready.go:58] node "addons-211632" has status "Ready":"False"
	I1026 00:55:08.311211   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:08.332136   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:08.397958   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:08.637341   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:08.811293   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:08.832248   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:08.898001   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:09.137772   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:09.311543   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:09.331723   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:09.397759   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:09.638217   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:09.811080   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:09.832350   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:09.898271   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:10.138007   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:10.311079   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:10.332104   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:10.398064   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:10.637639   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:10.801220   16147 node_ready.go:58] node "addons-211632" has status "Ready":"False"
	I1026 00:55:10.811518   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:10.832369   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:10.897779   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:11.137144   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:11.311862   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:11.331978   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:11.397705   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:11.638148   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:11.811483   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:11.832891   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:11.897426   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:12.138071   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:12.311499   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:12.332709   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:12.397618   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:12.638731   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:12.811337   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:12.832432   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:12.897535   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:13.138141   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:13.300526   16147 node_ready.go:58] node "addons-211632" has status "Ready":"False"
	I1026 00:55:13.311162   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:13.332261   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:13.399945   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:13.637483   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:13.811424   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:13.832557   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:13.897384   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:14.138055   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:14.310987   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:14.332053   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:14.398183   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:14.637753   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:14.811756   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:14.831848   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:14.897977   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:15.137589   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:15.301206   16147 node_ready.go:58] node "addons-211632" has status "Ready":"False"
	I1026 00:55:15.312043   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:15.332219   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:15.398143   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:15.637780   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:15.811365   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:15.832645   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:15.897397   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:16.138073   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:16.310692   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:16.331726   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:16.397819   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:16.637503   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:16.811808   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:16.832043   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:16.897593   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:17.138276   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:17.311720   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:17.332445   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:17.398306   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:17.638481   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:17.800936   16147 node_ready.go:58] node "addons-211632" has status "Ready":"False"
	I1026 00:55:17.811485   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:17.832922   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:17.897845   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:18.137239   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:18.311244   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:18.332446   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:18.397345   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:18.638108   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:18.811330   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:18.832467   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:18.897253   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:19.137691   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:19.311568   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:19.331476   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:19.397393   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:19.637988   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:19.801619   16147 node_ready.go:58] node "addons-211632" has status "Ready":"False"
	I1026 00:55:19.811089   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:19.832271   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:19.898200   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:20.137776   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:20.311744   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:20.331700   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:20.397821   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:20.637425   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:20.811679   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:20.831639   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:20.900816   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:21.195069   16147 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1026 00:55:21.195099   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:21.301238   16147 node_ready.go:49] node "addons-211632" has status "Ready":"True"
	I1026 00:55:21.301263   16147 node_ready.go:38] duration metric: took 34.599788711s waiting for node "addons-211632" to be "Ready" ...
	I1026 00:55:21.301274   16147 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1026 00:55:21.310941   16147 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-htzfl" in "kube-system" namespace to be "Ready" ...
	I1026 00:55:21.312856   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:21.331866   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:21.399119   16147 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1026 00:55:21.399147   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:21.640941   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:21.812077   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:21.832726   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:21.897938   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:22.139040   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:22.311931   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:22.333159   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:22.398611   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:22.639986   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:22.813321   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:22.826882   16147 pod_ready.go:92] pod "coredns-5dd5756b68-htzfl" in "kube-system" namespace has status "Ready":"True"
	I1026 00:55:22.826907   16147 pod_ready.go:81] duration metric: took 1.515937118s waiting for pod "coredns-5dd5756b68-htzfl" in "kube-system" namespace to be "Ready" ...
	I1026 00:55:22.826928   16147 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-211632" in "kube-system" namespace to be "Ready" ...
	I1026 00:55:22.832597   16147 pod_ready.go:92] pod "etcd-addons-211632" in "kube-system" namespace has status "Ready":"True"
	I1026 00:55:22.832678   16147 pod_ready.go:81] duration metric: took 5.741638ms waiting for pod "etcd-addons-211632" in "kube-system" namespace to be "Ready" ...
	I1026 00:55:22.832709   16147 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-211632" in "kube-system" namespace to be "Ready" ...
	I1026 00:55:22.833175   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:22.895318   16147 pod_ready.go:92] pod "kube-apiserver-addons-211632" in "kube-system" namespace has status "Ready":"True"
	I1026 00:55:22.895350   16147 pod_ready.go:81] duration metric: took 62.621257ms waiting for pod "kube-apiserver-addons-211632" in "kube-system" namespace to be "Ready" ...
	I1026 00:55:22.895366   16147 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-211632" in "kube-system" namespace to be "Ready" ...
	I1026 00:55:22.899469   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:22.901020   16147 pod_ready.go:92] pod "kube-controller-manager-addons-211632" in "kube-system" namespace has status "Ready":"True"
	I1026 00:55:22.901044   16147 pod_ready.go:81] duration metric: took 5.668968ms waiting for pod "kube-controller-manager-addons-211632" in "kube-system" namespace to be "Ready" ...
	I1026 00:55:22.901059   16147 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5xv7d" in "kube-system" namespace to be "Ready" ...
	I1026 00:55:23.138745   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:23.301178   16147 pod_ready.go:92] pod "kube-proxy-5xv7d" in "kube-system" namespace has status "Ready":"True"
	I1026 00:55:23.301200   16147 pod_ready.go:81] duration metric: took 400.133692ms waiting for pod "kube-proxy-5xv7d" in "kube-system" namespace to be "Ready" ...
	I1026 00:55:23.301209   16147 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-211632" in "kube-system" namespace to be "Ready" ...
	I1026 00:55:23.311700   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:23.332413   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:23.398389   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:23.638555   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:23.701350   16147 pod_ready.go:92] pod "kube-scheduler-addons-211632" in "kube-system" namespace has status "Ready":"True"
	I1026 00:55:23.701377   16147 pod_ready.go:81] duration metric: took 400.16015ms waiting for pod "kube-scheduler-addons-211632" in "kube-system" namespace to be "Ready" ...
	I1026 00:55:23.701392   16147 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-8pc98" in "kube-system" namespace to be "Ready" ...
	I1026 00:55:23.813985   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:23.892569   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:23.898988   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:24.197241   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:24.311651   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:24.333393   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:24.399425   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:24.694368   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:24.812429   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:24.832703   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:24.898526   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:25.138974   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:25.311412   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:25.333455   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:25.398632   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:25.640949   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:25.815110   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:25.832943   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:25.898684   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:26.008051   16147 pod_ready.go:102] pod "metrics-server-7c66d45ddc-8pc98" in "kube-system" namespace has status "Ready":"False"
	I1026 00:55:26.138602   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:26.311333   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:26.332842   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:26.398264   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:26.639245   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:26.812338   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:26.832926   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:26.898458   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:27.139560   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:27.310986   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:27.333320   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:27.398691   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:27.640146   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:27.830440   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:27.833475   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:27.898550   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:28.138628   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:28.311950   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:28.332315   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:28.398881   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:28.507265   16147 pod_ready.go:102] pod "metrics-server-7c66d45ddc-8pc98" in "kube-system" namespace has status "Ready":"False"
	I1026 00:55:28.638573   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:28.811933   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:28.833052   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:28.899387   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:29.008903   16147 pod_ready.go:92] pod "metrics-server-7c66d45ddc-8pc98" in "kube-system" namespace has status "Ready":"True"
	I1026 00:55:29.008926   16147 pod_ready.go:81] duration metric: took 5.307527561s waiting for pod "metrics-server-7c66d45ddc-8pc98" in "kube-system" namespace to be "Ready" ...
	I1026 00:55:29.008950   16147 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-bbnbx" in "kube-system" namespace to be "Ready" ...
	I1026 00:55:29.138615   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:29.311985   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:29.332321   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:29.398799   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:29.639371   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:29.811379   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:29.832950   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:29.898754   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:30.138543   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:30.312497   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:30.333758   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:30.399177   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:30.639579   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:30.811622   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:30.832523   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:30.899016   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:31.025835   16147 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-bbnbx" in "kube-system" namespace has status "Ready":"False"
	I1026 00:55:31.139661   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:31.312091   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:31.333036   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:31.398311   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:31.638833   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:31.810995   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:31.832087   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:31.898844   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:32.138111   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:32.312071   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:32.332314   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:32.399345   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:32.639599   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:32.811766   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:32.832311   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:32.898190   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:33.138991   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:33.311458   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:33.332519   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:33.400104   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:33.525662   16147 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-bbnbx" in "kube-system" namespace has status "Ready":"False"
	I1026 00:55:33.638919   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:33.811209   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:33.832685   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:33.898628   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:34.138901   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:34.311374   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:34.333065   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:34.398636   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:34.638916   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:34.811312   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:34.832735   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:34.898073   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:35.137943   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:35.310924   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:35.332632   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:35.397939   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:35.527330   16147 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-bbnbx" in "kube-system" namespace has status "Ready":"False"
	I1026 00:55:35.698915   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:35.812215   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:35.893696   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:35.898421   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:36.190973   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:36.312828   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:36.332584   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:36.398209   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:36.639072   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:36.811979   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:36.832321   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:36.899092   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:37.138522   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:37.311660   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:37.331940   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:37.397951   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:37.692348   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:37.814875   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:37.896136   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:37.901507   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:38.026530   16147 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-bbnbx" in "kube-system" namespace has status "Ready":"False"
	I1026 00:55:38.139795   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:38.312746   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:38.333938   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:38.399613   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:38.695676   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:38.814104   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:38.832273   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:38.898719   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:39.140170   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:39.312339   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:39.333614   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:39.398866   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:39.639221   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:39.811737   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:39.833053   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:39.898420   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:40.139445   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:40.312107   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:40.333046   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:40.398126   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:40.526363   16147 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-bbnbx" in "kube-system" namespace has status "Ready":"False"
	I1026 00:55:40.639459   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:40.812263   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:40.836319   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:40.899118   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:41.196421   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:41.312278   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:41.397946   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:41.401366   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:55:41.693640   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:41.811844   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:41.892245   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:41.898582   16147 kapi.go:107] duration metric: took 49.078885299s to wait for kubernetes.io/minikube-addons=registry ...
	I1026 00:55:42.209535   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:42.311691   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:42.332030   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:42.640208   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:42.811036   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:42.832140   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:43.026261   16147 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-bbnbx" in "kube-system" namespace has status "Ready":"False"
	I1026 00:55:43.139792   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:43.312234   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:43.333403   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:43.639532   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:43.811947   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:43.833135   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:44.140088   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:44.312458   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:44.332946   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:44.639679   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:44.812194   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:44.833236   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:45.026882   16147 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-bbnbx" in "kube-system" namespace has status "Ready":"False"
	I1026 00:55:45.139761   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:45.311822   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:45.332337   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:45.638752   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:45.814027   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:45.832589   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:46.138311   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:46.312013   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:46.332384   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:46.639286   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:46.811164   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:46.832647   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:47.027940   16147 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-bbnbx" in "kube-system" namespace has status "Ready":"False"
	I1026 00:55:47.138238   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:47.311799   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:47.332469   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:47.640018   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:47.811884   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:47.832087   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:48.139020   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:48.311059   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:48.333488   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:48.697545   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:48.812367   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:48.894553   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:49.103090   16147 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-bbnbx" in "kube-system" namespace has status "Ready":"False"
	I1026 00:55:49.196341   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:49.312356   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:49.394180   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:49.697580   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:49.811956   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:49.832317   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:50.139406   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:50.311601   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:50.332894   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:50.694999   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:50.812381   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:50.832988   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:51.138557   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:51.312235   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:51.333438   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:51.526115   16147 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-bbnbx" in "kube-system" namespace has status "Ready":"False"
	I1026 00:55:51.639403   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:51.811471   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:51.833411   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:52.139833   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:52.311461   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:52.332578   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:52.640635   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:52.812104   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:52.833154   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:53.139289   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:53.311193   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:53.332353   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:53.639950   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:53.811084   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:53.834240   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:54.026835   16147 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-bbnbx" in "kube-system" namespace has status "Ready":"True"
	I1026 00:55:54.026866   16147 pod_ready.go:81] duration metric: took 25.01790697s waiting for pod "nvidia-device-plugin-daemonset-bbnbx" in "kube-system" namespace to be "Ready" ...
	I1026 00:55:54.026894   16147 pod_ready.go:38] duration metric: took 32.725606408s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1026 00:55:54.026914   16147 api_server.go:52] waiting for apiserver process to appear ...
	I1026 00:55:54.026942   16147 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 00:55:54.027015   16147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 00:55:54.064025   16147 cri.go:89] found id: "4a7726545672e0e1bce296d535baa0dbb287da750c1fecfc4e980fb47db3b6b3"
	I1026 00:55:54.064045   16147 cri.go:89] found id: ""
	I1026 00:55:54.064055   16147 logs.go:284] 1 containers: [4a7726545672e0e1bce296d535baa0dbb287da750c1fecfc4e980fb47db3b6b3]
	I1026 00:55:54.064118   16147 ssh_runner.go:195] Run: which crictl
	I1026 00:55:54.067239   16147 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 00:55:54.067289   16147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 00:55:54.128528   16147 cri.go:89] found id: "9dd08b1c15310e3e02aa83c8c9360860fc2c01140355df46644f94326bfa6a96"
	I1026 00:55:54.128547   16147 cri.go:89] found id: ""
	I1026 00:55:54.128554   16147 logs.go:284] 1 containers: [9dd08b1c15310e3e02aa83c8c9360860fc2c01140355df46644f94326bfa6a96]
	I1026 00:55:54.128592   16147 ssh_runner.go:195] Run: which crictl
	I1026 00:55:54.131837   16147 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 00:55:54.131900   16147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 00:55:54.138605   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:54.211736   16147 cri.go:89] found id: "ec3c9ea2041870efb48a938e4af2e89dd331147c012fa336b9d224dc6a6828b8"
	I1026 00:55:54.211755   16147 cri.go:89] found id: ""
	I1026 00:55:54.211762   16147 logs.go:284] 1 containers: [ec3c9ea2041870efb48a938e4af2e89dd331147c012fa336b9d224dc6a6828b8]
	I1026 00:55:54.211804   16147 ssh_runner.go:195] Run: which crictl
	I1026 00:55:54.215051   16147 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 00:55:54.215107   16147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 00:55:54.250830   16147 cri.go:89] found id: "943e6b682f7fd08720ef8644e4e1786ce3ebc489950b927699e9a294a1634fd4"
	I1026 00:55:54.250854   16147 cri.go:89] found id: ""
	I1026 00:55:54.250863   16147 logs.go:284] 1 containers: [943e6b682f7fd08720ef8644e4e1786ce3ebc489950b927699e9a294a1634fd4]
	I1026 00:55:54.250904   16147 ssh_runner.go:195] Run: which crictl
	I1026 00:55:54.293314   16147 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 00:55:54.293385   16147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 00:55:54.312075   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:54.328347   16147 cri.go:89] found id: "7b07cef64940d835fcfa592d994904e5ac74d1f697e0c012fbee3686ca594dc7"
	I1026 00:55:54.328371   16147 cri.go:89] found id: ""
	I1026 00:55:54.328380   16147 logs.go:284] 1 containers: [7b07cef64940d835fcfa592d994904e5ac74d1f697e0c012fbee3686ca594dc7]
	I1026 00:55:54.328435   16147 ssh_runner.go:195] Run: which crictl
	I1026 00:55:54.332397   16147 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 00:55:54.332467   16147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 00:55:54.332594   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:54.404905   16147 cri.go:89] found id: "ce5348c34feb4c043c46d6ec80097165c6c347bb62987af2221ed402e15afec5"
	I1026 00:55:54.404935   16147 cri.go:89] found id: ""
	I1026 00:55:54.404944   16147 logs.go:284] 1 containers: [ce5348c34feb4c043c46d6ec80097165c6c347bb62987af2221ed402e15afec5]
	I1026 00:55:54.405005   16147 ssh_runner.go:195] Run: which crictl
	I1026 00:55:54.409085   16147 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 00:55:54.409136   16147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 00:55:54.444168   16147 cri.go:89] found id: "77f5574833c79f17c76feaf56b853a4342b58c235340ef89f64bba26a7d6d870"
	I1026 00:55:54.444193   16147 cri.go:89] found id: ""
	I1026 00:55:54.444202   16147 logs.go:284] 1 containers: [77f5574833c79f17c76feaf56b853a4342b58c235340ef89f64bba26a7d6d870]
	I1026 00:55:54.444249   16147 ssh_runner.go:195] Run: which crictl
	I1026 00:55:54.447480   16147 logs.go:123] Gathering logs for kubelet ...
	I1026 00:55:54.447514   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 00:55:54.572073   16147 logs.go:123] Gathering logs for dmesg ...
	I1026 00:55:54.572107   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 00:55:54.583713   16147 logs.go:123] Gathering logs for describe nodes ...
	I1026 00:55:54.583743   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1026 00:55:54.638288   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:54.722191   16147 logs.go:123] Gathering logs for kube-proxy [7b07cef64940d835fcfa592d994904e5ac74d1f697e0c012fbee3686ca594dc7] ...
	I1026 00:55:54.722221   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b07cef64940d835fcfa592d994904e5ac74d1f697e0c012fbee3686ca594dc7"
	I1026 00:55:54.756120   16147 logs.go:123] Gathering logs for kube-controller-manager [ce5348c34feb4c043c46d6ec80097165c6c347bb62987af2221ed402e15afec5] ...
	I1026 00:55:54.756148   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce5348c34feb4c043c46d6ec80097165c6c347bb62987af2221ed402e15afec5"
	I1026 00:55:54.811193   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:54.824875   16147 logs.go:123] Gathering logs for kindnet [77f5574833c79f17c76feaf56b853a4342b58c235340ef89f64bba26a7d6d870] ...
	I1026 00:55:54.824915   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77f5574833c79f17c76feaf56b853a4342b58c235340ef89f64bba26a7d6d870"
	I1026 00:55:54.833208   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:54.860545   16147 logs.go:123] Gathering logs for container status ...
	I1026 00:55:54.860579   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 00:55:54.901261   16147 logs.go:123] Gathering logs for kube-apiserver [4a7726545672e0e1bce296d535baa0dbb287da750c1fecfc4e980fb47db3b6b3] ...
	I1026 00:55:54.901287   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a7726545672e0e1bce296d535baa0dbb287da750c1fecfc4e980fb47db3b6b3"
	I1026 00:55:54.947374   16147 logs.go:123] Gathering logs for etcd [9dd08b1c15310e3e02aa83c8c9360860fc2c01140355df46644f94326bfa6a96] ...
	I1026 00:55:54.947412   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9dd08b1c15310e3e02aa83c8c9360860fc2c01140355df46644f94326bfa6a96"
	I1026 00:55:54.989204   16147 logs.go:123] Gathering logs for coredns [ec3c9ea2041870efb48a938e4af2e89dd331147c012fa336b9d224dc6a6828b8] ...
	I1026 00:55:54.989244   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec3c9ea2041870efb48a938e4af2e89dd331147c012fa336b9d224dc6a6828b8"
	I1026 00:55:55.034192   16147 logs.go:123] Gathering logs for kube-scheduler [943e6b682f7fd08720ef8644e4e1786ce3ebc489950b927699e9a294a1634fd4] ...
	I1026 00:55:55.034239   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 943e6b682f7fd08720ef8644e4e1786ce3ebc489950b927699e9a294a1634fd4"
	I1026 00:55:55.125686   16147 logs.go:123] Gathering logs for CRI-O ...
	I1026 00:55:55.125730   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 00:55:55.195764   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:55.313036   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:55.395671   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:55.695041   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:55.813228   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:55.895684   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:56.195417   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:56.311724   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:56.392309   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:56.694744   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:56.812450   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:56.833551   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:57.139220   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:57.312177   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:57.332687   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:57.640436   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:57.815178   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:57.832755   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:57.860222   16147 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 00:55:57.902714   16147 api_server.go:72] duration metric: took 1m11.407706592s to wait for apiserver process to appear ...
	I1026 00:55:57.902741   16147 api_server.go:88] waiting for apiserver healthz status ...
	I1026 00:55:57.902773   16147 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 00:55:57.902816   16147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 00:55:57.937651   16147 cri.go:89] found id: "4a7726545672e0e1bce296d535baa0dbb287da750c1fecfc4e980fb47db3b6b3"
	I1026 00:55:57.937697   16147 cri.go:89] found id: ""
	I1026 00:55:57.937710   16147 logs.go:284] 1 containers: [4a7726545672e0e1bce296d535baa0dbb287da750c1fecfc4e980fb47db3b6b3]
	I1026 00:55:57.937763   16147 ssh_runner.go:195] Run: which crictl
	I1026 00:55:57.941505   16147 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 00:55:57.941566   16147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 00:55:58.011525   16147 cri.go:89] found id: "9dd08b1c15310e3e02aa83c8c9360860fc2c01140355df46644f94326bfa6a96"
	I1026 00:55:58.011553   16147 cri.go:89] found id: ""
	I1026 00:55:58.011563   16147 logs.go:284] 1 containers: [9dd08b1c15310e3e02aa83c8c9360860fc2c01140355df46644f94326bfa6a96]
	I1026 00:55:58.011623   16147 ssh_runner.go:195] Run: which crictl
	I1026 00:55:58.015378   16147 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 00:55:58.015503   16147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 00:55:58.098159   16147 cri.go:89] found id: "ec3c9ea2041870efb48a938e4af2e89dd331147c012fa336b9d224dc6a6828b8"
	I1026 00:55:58.098180   16147 cri.go:89] found id: ""
	I1026 00:55:58.098189   16147 logs.go:284] 1 containers: [ec3c9ea2041870efb48a938e4af2e89dd331147c012fa336b9d224dc6a6828b8]
	I1026 00:55:58.098254   16147 ssh_runner.go:195] Run: which crictl
	I1026 00:55:58.101548   16147 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 00:55:58.101609   16147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 00:55:58.139454   16147 cri.go:89] found id: "943e6b682f7fd08720ef8644e4e1786ce3ebc489950b927699e9a294a1634fd4"
	I1026 00:55:58.139477   16147 cri.go:89] found id: ""
	I1026 00:55:58.139487   16147 logs.go:284] 1 containers: [943e6b682f7fd08720ef8644e4e1786ce3ebc489950b927699e9a294a1634fd4]
	I1026 00:55:58.139536   16147 ssh_runner.go:195] Run: which crictl
	I1026 00:55:58.140054   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:58.142907   16147 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 00:55:58.142964   16147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 00:55:58.211366   16147 cri.go:89] found id: "7b07cef64940d835fcfa592d994904e5ac74d1f697e0c012fbee3686ca594dc7"
	I1026 00:55:58.211392   16147 cri.go:89] found id: ""
	I1026 00:55:58.211402   16147 logs.go:284] 1 containers: [7b07cef64940d835fcfa592d994904e5ac74d1f697e0c012fbee3686ca594dc7]
	I1026 00:55:58.211455   16147 ssh_runner.go:195] Run: which crictl
	I1026 00:55:58.214863   16147 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 00:55:58.214935   16147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 00:55:58.250938   16147 cri.go:89] found id: "ce5348c34feb4c043c46d6ec80097165c6c347bb62987af2221ed402e15afec5"
	I1026 00:55:58.250955   16147 cri.go:89] found id: ""
	I1026 00:55:58.250962   16147 logs.go:284] 1 containers: [ce5348c34feb4c043c46d6ec80097165c6c347bb62987af2221ed402e15afec5]
	I1026 00:55:58.251001   16147 ssh_runner.go:195] Run: which crictl
	I1026 00:55:58.290667   16147 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 00:55:58.290733   16147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 00:55:58.312987   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:58.326956   16147 cri.go:89] found id: "77f5574833c79f17c76feaf56b853a4342b58c235340ef89f64bba26a7d6d870"
	I1026 00:55:58.326980   16147 cri.go:89] found id: ""
	I1026 00:55:58.326989   16147 logs.go:284] 1 containers: [77f5574833c79f17c76feaf56b853a4342b58c235340ef89f64bba26a7d6d870]
	I1026 00:55:58.327043   16147 ssh_runner.go:195] Run: which crictl
	I1026 00:55:58.330940   16147 logs.go:123] Gathering logs for kube-controller-manager [ce5348c34feb4c043c46d6ec80097165c6c347bb62987af2221ed402e15afec5] ...
	I1026 00:55:58.330980   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce5348c34feb4c043c46d6ec80097165c6c347bb62987af2221ed402e15afec5"
	I1026 00:55:58.333777   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:58.426597   16147 logs.go:123] Gathering logs for kindnet [77f5574833c79f17c76feaf56b853a4342b58c235340ef89f64bba26a7d6d870] ...
	I1026 00:55:58.426636   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77f5574833c79f17c76feaf56b853a4342b58c235340ef89f64bba26a7d6d870"
	I1026 00:55:58.463192   16147 logs.go:123] Gathering logs for CRI-O ...
	I1026 00:55:58.463224   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 00:55:58.565297   16147 logs.go:123] Gathering logs for kubelet ...
	I1026 00:55:58.565327   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 00:55:58.640955   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:58.651879   16147 logs.go:123] Gathering logs for describe nodes ...
	I1026 00:55:58.651911   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1026 00:55:58.807375   16147 logs.go:123] Gathering logs for etcd [9dd08b1c15310e3e02aa83c8c9360860fc2c01140355df46644f94326bfa6a96] ...
	I1026 00:55:58.807421   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9dd08b1c15310e3e02aa83c8c9360860fc2c01140355df46644f94326bfa6a96"
	I1026 00:55:58.811551   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:58.833355   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:58.853156   16147 logs.go:123] Gathering logs for kube-scheduler [943e6b682f7fd08720ef8644e4e1786ce3ebc489950b927699e9a294a1634fd4] ...
	I1026 00:55:58.853190   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 943e6b682f7fd08720ef8644e4e1786ce3ebc489950b927699e9a294a1634fd4"
	I1026 00:55:58.926488   16147 logs.go:123] Gathering logs for kube-proxy [7b07cef64940d835fcfa592d994904e5ac74d1f697e0c012fbee3686ca594dc7] ...
	I1026 00:55:58.926523   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b07cef64940d835fcfa592d994904e5ac74d1f697e0c012fbee3686ca594dc7"
	I1026 00:55:58.996170   16147 logs.go:123] Gathering logs for container status ...
	I1026 00:55:58.996201   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 00:55:59.038889   16147 logs.go:123] Gathering logs for dmesg ...
	I1026 00:55:59.038915   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 00:55:59.050977   16147 logs.go:123] Gathering logs for kube-apiserver [4a7726545672e0e1bce296d535baa0dbb287da750c1fecfc4e980fb47db3b6b3] ...
	I1026 00:55:59.051006   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a7726545672e0e1bce296d535baa0dbb287da750c1fecfc4e980fb47db3b6b3"
	I1026 00:55:59.117287   16147 logs.go:123] Gathering logs for coredns [ec3c9ea2041870efb48a938e4af2e89dd331147c012fa336b9d224dc6a6828b8] ...
	I1026 00:55:59.117320   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec3c9ea2041870efb48a938e4af2e89dd331147c012fa336b9d224dc6a6828b8"
	I1026 00:55:59.139511   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:59.311992   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:59.333106   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:55:59.640027   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:55:59.812170   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:55:59.833185   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:56:00.140032   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:56:00.312165   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:56:00.332833   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:56:00.639622   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:56:00.811449   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:56:00.833157   16147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:56:01.209946   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:56:01.311991   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:56:01.332602   16147 kapi.go:107] duration metric: took 1m8.515908463s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1026 00:56:01.639357   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:56:01.654742   16147 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1026 00:56:01.661289   16147 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1026 00:56:01.662530   16147 api_server.go:141] control plane version: v1.28.3
	I1026 00:56:01.662556   16147 api_server.go:131] duration metric: took 3.759808008s to wait for apiserver health ...
	I1026 00:56:01.662570   16147 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 00:56:01.662599   16147 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 00:56:01.662656   16147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 00:56:01.708420   16147 cri.go:89] found id: "4a7726545672e0e1bce296d535baa0dbb287da750c1fecfc4e980fb47db3b6b3"
	I1026 00:56:01.708443   16147 cri.go:89] found id: ""
	I1026 00:56:01.708452   16147 logs.go:284] 1 containers: [4a7726545672e0e1bce296d535baa0dbb287da750c1fecfc4e980fb47db3b6b3]
	I1026 00:56:01.708505   16147 ssh_runner.go:195] Run: which crictl
	I1026 00:56:01.711892   16147 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 00:56:01.711956   16147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 00:56:01.748123   16147 cri.go:89] found id: "9dd08b1c15310e3e02aa83c8c9360860fc2c01140355df46644f94326bfa6a96"
	I1026 00:56:01.748155   16147 cri.go:89] found id: ""
	I1026 00:56:01.748163   16147 logs.go:284] 1 containers: [9dd08b1c15310e3e02aa83c8c9360860fc2c01140355df46644f94326bfa6a96]
	I1026 00:56:01.748217   16147 ssh_runner.go:195] Run: which crictl
	I1026 00:56:01.792462   16147 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 00:56:01.792532   16147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 00:56:01.811786   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:56:01.829183   16147 cri.go:89] found id: "ec3c9ea2041870efb48a938e4af2e89dd331147c012fa336b9d224dc6a6828b8"
	I1026 00:56:01.829207   16147 cri.go:89] found id: ""
	I1026 00:56:01.829218   16147 logs.go:284] 1 containers: [ec3c9ea2041870efb48a938e4af2e89dd331147c012fa336b9d224dc6a6828b8]
	I1026 00:56:01.829271   16147 ssh_runner.go:195] Run: which crictl
	I1026 00:56:01.833261   16147 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 00:56:01.833327   16147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 00:56:01.904436   16147 cri.go:89] found id: "943e6b682f7fd08720ef8644e4e1786ce3ebc489950b927699e9a294a1634fd4"
	I1026 00:56:01.904461   16147 cri.go:89] found id: ""
	I1026 00:56:01.904470   16147 logs.go:284] 1 containers: [943e6b682f7fd08720ef8644e4e1786ce3ebc489950b927699e9a294a1634fd4]
	I1026 00:56:01.904525   16147 ssh_runner.go:195] Run: which crictl
	I1026 00:56:01.907731   16147 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 00:56:01.907796   16147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 00:56:01.992517   16147 cri.go:89] found id: "7b07cef64940d835fcfa592d994904e5ac74d1f697e0c012fbee3686ca594dc7"
	I1026 00:56:01.992541   16147 cri.go:89] found id: ""
	I1026 00:56:01.992551   16147 logs.go:284] 1 containers: [7b07cef64940d835fcfa592d994904e5ac74d1f697e0c012fbee3686ca594dc7]
	I1026 00:56:01.992605   16147 ssh_runner.go:195] Run: which crictl
	I1026 00:56:01.996423   16147 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 00:56:01.996477   16147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	E1026 00:56:02.038275   16147 logs.go:281] Failed to list containers for "kube-controller-manager": crictl list: sudo crictl ps -a --quiet --name=kube-controller-manager: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-10-26T00:56:02Z" level=fatal msg="unable to determine image API version: rpc error: code = Unknown desc = lstat /var/lib/containers/storage/overlay-images/.tmp-images.json582843453: no such file or directory"
	I1026 00:56:02.038304   16147 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 00:56:02.038359   16147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 00:56:02.122940   16147 cri.go:89] found id: "77f5574833c79f17c76feaf56b853a4342b58c235340ef89f64bba26a7d6d870"
	I1026 00:56:02.122964   16147 cri.go:89] found id: ""
	I1026 00:56:02.122974   16147 logs.go:284] 1 containers: [77f5574833c79f17c76feaf56b853a4342b58c235340ef89f64bba26a7d6d870]
	I1026 00:56:02.123027   16147 ssh_runner.go:195] Run: which crictl
	I1026 00:56:02.126560   16147 logs.go:123] Gathering logs for describe nodes ...
	I1026 00:56:02.126593   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1026 00:56:02.140334   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:56:02.314271   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:56:02.411486   16147 logs.go:123] Gathering logs for etcd [9dd08b1c15310e3e02aa83c8c9360860fc2c01140355df46644f94326bfa6a96] ...
	I1026 00:56:02.411521   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9dd08b1c15310e3e02aa83c8c9360860fc2c01140355df46644f94326bfa6a96"
	I1026 00:56:02.500977   16147 logs.go:123] Gathering logs for kube-scheduler [943e6b682f7fd08720ef8644e4e1786ce3ebc489950b927699e9a294a1634fd4] ...
	I1026 00:56:02.501011   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 943e6b682f7fd08720ef8644e4e1786ce3ebc489950b927699e9a294a1634fd4"
	I1026 00:56:02.542289   16147 logs.go:123] Gathering logs for kube-proxy [7b07cef64940d835fcfa592d994904e5ac74d1f697e0c012fbee3686ca594dc7] ...
	I1026 00:56:02.542317   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b07cef64940d835fcfa592d994904e5ac74d1f697e0c012fbee3686ca594dc7"
	I1026 00:56:02.575852   16147 logs.go:123] Gathering logs for CRI-O ...
	I1026 00:56:02.575880   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 00:56:02.639285   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:56:02.659701   16147 logs.go:123] Gathering logs for container status ...
	I1026 00:56:02.659745   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 00:56:02.714653   16147 logs.go:123] Gathering logs for kubelet ...
	I1026 00:56:02.714680   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 00:56:02.785958   16147 logs.go:123] Gathering logs for dmesg ...
	I1026 00:56:02.786000   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 00:56:02.798309   16147 logs.go:123] Gathering logs for kube-apiserver [4a7726545672e0e1bce296d535baa0dbb287da750c1fecfc4e980fb47db3b6b3] ...
	I1026 00:56:02.798343   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a7726545672e0e1bce296d535baa0dbb287da750c1fecfc4e980fb47db3b6b3"
	I1026 00:56:02.811965   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:56:02.913553   16147 logs.go:123] Gathering logs for coredns [ec3c9ea2041870efb48a938e4af2e89dd331147c012fa336b9d224dc6a6828b8] ...
	I1026 00:56:02.913624   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec3c9ea2041870efb48a938e4af2e89dd331147c012fa336b9d224dc6a6828b8"
	I1026 00:56:03.019201   16147 logs.go:123] Gathering logs for kindnet [77f5574833c79f17c76feaf56b853a4342b58c235340ef89f64bba26a7d6d870] ...
	I1026 00:56:03.019240   16147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77f5574833c79f17c76feaf56b853a4342b58c235340ef89f64bba26a7d6d870"
	I1026 00:56:03.193392   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:56:03.311641   16147 kapi.go:107] duration metric: took 1m7.508753908s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1026 00:56:03.316986   16147 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-211632 cluster.
	I1026 00:56:03.319331   16147 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1026 00:56:03.321260   16147 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1026 00:56:03.639621   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:56:04.139892   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:56:04.639659   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:56:05.138490   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:56:05.647164   16147 system_pods.go:59] 19 kube-system pods found
	I1026 00:56:05.647273   16147 system_pods.go:61] "coredns-5dd5756b68-htzfl" [adda9bac-99f3-459c-a0d8-f314baef0ed1] Running
	I1026 00:56:05.647292   16147 system_pods.go:61] "csi-hostpath-attacher-0" [45ea8f81-1da5-4588-bf4a-2dd212359911] Running
	I1026 00:56:05.647326   16147 system_pods.go:61] "csi-hostpath-resizer-0" [cdc8bcef-d920-49e4-9263-b0c88c263c1a] Running
	I1026 00:56:05.647350   16147 system_pods.go:61] "csi-hostpathplugin-n8dsf" [e510ef5d-092d-4719-9579-047d99e0edb6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1026 00:56:05.647367   16147 system_pods.go:61] "etcd-addons-211632" [08d2e2c9-7aba-4242-a61f-a0c94793f8bf] Running
	I1026 00:56:05.647383   16147 system_pods.go:61] "kindnet-x4r64" [59f20b0c-bba3-4aac-92c6-4f77be16eaf6] Running
	I1026 00:56:05.647414   16147 system_pods.go:61] "kube-apiserver-addons-211632" [832867a3-744f-44d5-8c10-af03b46048b9] Running
	I1026 00:56:05.647434   16147 system_pods.go:61] "kube-controller-manager-addons-211632" [d0b067ad-3b88-4b34-beec-e17e01d2956b] Running
	I1026 00:56:05.647450   16147 system_pods.go:61] "kube-ingress-dns-minikube" [aa4700ad-2b9b-40e1-91ea-7472194766c1] Running
	I1026 00:56:05.647468   16147 system_pods.go:61] "kube-proxy-5xv7d" [e5b7e0ed-0535-4795-9c45-22032cba4c2f] Running
	I1026 00:56:05.647499   16147 system_pods.go:61] "kube-scheduler-addons-211632" [80da97c4-dc62-4595-b830-ad23f164c0e2] Running
	I1026 00:56:05.647518   16147 system_pods.go:61] "metrics-server-7c66d45ddc-8pc98" [40138e51-703f-4aa0-b5ec-5392438b711d] Running
	I1026 00:56:05.647533   16147 system_pods.go:61] "nvidia-device-plugin-daemonset-bbnbx" [64d4d05e-0610-4bb6-a7cc-53da0eb05823] Running
	I1026 00:56:05.647547   16147 system_pods.go:61] "registry-proxy-q4wbt" [77c9316b-3c51-4ba8-8001-81a3132d7651] Running
	I1026 00:56:05.647561   16147 system_pods.go:61] "registry-svllb" [6462cf6d-b638-4950-bc58-6d40cfa1a9e9] Running
	I1026 00:56:05.647597   16147 system_pods.go:61] "snapshot-controller-58dbcc7b99-5jf5l" [1a8b4529-2794-410c-b66f-93a91079cc01] Running
	I1026 00:56:05.647612   16147 system_pods.go:61] "snapshot-controller-58dbcc7b99-jz6r4" [49c869b6-da41-4714-a8c2-69ed29cde96a] Running
	I1026 00:56:05.647626   16147 system_pods.go:61] "storage-provisioner" [cf750322-b255-47b0-98e6-02a90c8c805c] Running
	I1026 00:56:05.647640   16147 system_pods.go:61] "tiller-deploy-7b677967b9-gth4w" [d29c4ef2-76c0-4d9a-bf0f-ff117c9b1924] Running
	I1026 00:56:05.647669   16147 system_pods.go:74] duration metric: took 3.985090507s to wait for pod list to return data ...
	I1026 00:56:05.647692   16147 default_sa.go:34] waiting for default service account to be created ...
	I1026 00:56:05.650062   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:56:05.691477   16147 default_sa.go:45] found service account: "default"
	I1026 00:56:05.691561   16147 default_sa.go:55] duration metric: took 43.854191ms for default service account to be created ...
	I1026 00:56:05.691584   16147 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 00:56:05.702232   16147 system_pods.go:86] 19 kube-system pods found
	I1026 00:56:05.702257   16147 system_pods.go:89] "coredns-5dd5756b68-htzfl" [adda9bac-99f3-459c-a0d8-f314baef0ed1] Running
	I1026 00:56:05.702265   16147 system_pods.go:89] "csi-hostpath-attacher-0" [45ea8f81-1da5-4588-bf4a-2dd212359911] Running
	I1026 00:56:05.702271   16147 system_pods.go:89] "csi-hostpath-resizer-0" [cdc8bcef-d920-49e4-9263-b0c88c263c1a] Running
	I1026 00:56:05.702282   16147 system_pods.go:89] "csi-hostpathplugin-n8dsf" [e510ef5d-092d-4719-9579-047d99e0edb6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1026 00:56:05.702290   16147 system_pods.go:89] "etcd-addons-211632" [08d2e2c9-7aba-4242-a61f-a0c94793f8bf] Running
	I1026 00:56:05.702297   16147 system_pods.go:89] "kindnet-x4r64" [59f20b0c-bba3-4aac-92c6-4f77be16eaf6] Running
	I1026 00:56:05.702303   16147 system_pods.go:89] "kube-apiserver-addons-211632" [832867a3-744f-44d5-8c10-af03b46048b9] Running
	I1026 00:56:05.702310   16147 system_pods.go:89] "kube-controller-manager-addons-211632" [d0b067ad-3b88-4b34-beec-e17e01d2956b] Running
	I1026 00:56:05.702317   16147 system_pods.go:89] "kube-ingress-dns-minikube" [aa4700ad-2b9b-40e1-91ea-7472194766c1] Running
	I1026 00:56:05.702323   16147 system_pods.go:89] "kube-proxy-5xv7d" [e5b7e0ed-0535-4795-9c45-22032cba4c2f] Running
	I1026 00:56:05.702335   16147 system_pods.go:89] "kube-scheduler-addons-211632" [80da97c4-dc62-4595-b830-ad23f164c0e2] Running
	I1026 00:56:05.702343   16147 system_pods.go:89] "metrics-server-7c66d45ddc-8pc98" [40138e51-703f-4aa0-b5ec-5392438b711d] Running
	I1026 00:56:05.702351   16147 system_pods.go:89] "nvidia-device-plugin-daemonset-bbnbx" [64d4d05e-0610-4bb6-a7cc-53da0eb05823] Running
	I1026 00:56:05.702357   16147 system_pods.go:89] "registry-proxy-q4wbt" [77c9316b-3c51-4ba8-8001-81a3132d7651] Running
	I1026 00:56:05.702362   16147 system_pods.go:89] "registry-svllb" [6462cf6d-b638-4950-bc58-6d40cfa1a9e9] Running
	I1026 00:56:05.702368   16147 system_pods.go:89] "snapshot-controller-58dbcc7b99-5jf5l" [1a8b4529-2794-410c-b66f-93a91079cc01] Running
	I1026 00:56:05.702374   16147 system_pods.go:89] "snapshot-controller-58dbcc7b99-jz6r4" [49c869b6-da41-4714-a8c2-69ed29cde96a] Running
	I1026 00:56:05.702379   16147 system_pods.go:89] "storage-provisioner" [cf750322-b255-47b0-98e6-02a90c8c805c] Running
	I1026 00:56:05.702385   16147 system_pods.go:89] "tiller-deploy-7b677967b9-gth4w" [d29c4ef2-76c0-4d9a-bf0f-ff117c9b1924] Running
	I1026 00:56:05.702393   16147 system_pods.go:126] duration metric: took 10.79503ms to wait for k8s-apps to be running ...
	I1026 00:56:05.702402   16147 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 00:56:05.702452   16147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 00:56:05.714764   16147 system_svc.go:56] duration metric: took 12.353716ms WaitForService to wait for kubelet.
	I1026 00:56:05.714793   16147 kubeadm.go:581] duration metric: took 1m19.219794144s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1026 00:56:05.714811   16147 node_conditions.go:102] verifying NodePressure condition ...
	I1026 00:56:05.717160   16147 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1026 00:56:05.717185   16147 node_conditions.go:123] node cpu capacity is 8
	I1026 00:56:05.717197   16147 node_conditions.go:105] duration metric: took 2.381657ms to run NodePressure ...
	I1026 00:56:05.717208   16147 start.go:228] waiting for startup goroutines ...
	I1026 00:56:06.138651   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:56:06.640521   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:56:07.139568   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:56:07.639262   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:56:08.138769   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:56:08.638739   16147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:56:09.139303   16147 kapi.go:107] duration metric: took 1m15.513926908s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1026 00:56:09.141514   16147 out.go:177] * Enabled addons: nvidia-device-plugin, ingress-dns, storage-provisioner, helm-tiller, inspektor-gadget, metrics-server, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1026 00:56:09.143385   16147 addons.go:502] enable addons completed in 1m22.743503762s: enabled=[nvidia-device-plugin ingress-dns storage-provisioner helm-tiller inspektor-gadget metrics-server storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1026 00:56:09.143427   16147 start.go:233] waiting for cluster config update ...
	I1026 00:56:09.143442   16147 start.go:242] writing updated cluster config ...
	I1026 00:56:09.143703   16147 ssh_runner.go:195] Run: rm -f paused
	I1026 00:56:09.192110   16147 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1026 00:56:09.194268   16147 out.go:177] * Done! kubectl is now configured to use "addons-211632" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Oct 26 00:56:26 addons-211632 crio[949]: time="2023-10-26 00:56:26.060531780Z" level=info msg="Removed container 122e811ab4ab20aee30067b40ae9122951796beffafa44c94b6efac8cd7a34af: default/registry-test/registry-test" id=a19daac0-e646-454f-9b50-1d96bcf3ef3f name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 00:56:26 addons-211632 crio[949]: time="2023-10-26 00:56:26.061850954Z" level=info msg="Removing container: 1b168f083fbf8f4a0a21df130e85bef0abb6540ede433ac733fc90f576ab55ce" id=aa2c3e80-3fed-49b0-ae5f-8714a6da52f1 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 00:56:26 addons-211632 crio[949]: time="2023-10-26 00:56:26.100481404Z" level=info msg="Removed container 1b168f083fbf8f4a0a21df130e85bef0abb6540ede433ac733fc90f576ab55ce: kube-system/registry-svllb/registry" id=aa2c3e80-3fed-49b0-ae5f-8714a6da52f1 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 00:56:26 addons-211632 crio[949]: time="2023-10-26 00:56:26.103832406Z" level=info msg="Removing container: 4515af3af50d3ad86589511b95dcbadd0588d17a3ddffc2939a6efa02780306f" id=87e2fe16-17ef-45d5-9e7a-c99f1c1e3c54 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 00:56:26 addons-211632 crio[949]: time="2023-10-26 00:56:26.121036394Z" level=info msg="Removed container 4515af3af50d3ad86589511b95dcbadd0588d17a3ddffc2939a6efa02780306f: kube-system/registry-proxy-q4wbt/registry-proxy" id=87e2fe16-17ef-45d5-9e7a-c99f1c1e3c54 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 00:56:29 addons-211632 crio[949]: time="2023-10-26 00:56:29.454159447Z" level=info msg="Stopping container: ca2d34327e3029ca56f3475279a47a4682826986be5d1dfb3ce769696e739ed6 (timeout: 30s)" id=5fba825e-a6db-4d33-8f44-ed3b62c90ae6 name=/runtime.v1.RuntimeService/StopContainer
	Oct 26 00:56:29 addons-211632 crio[949]: time="2023-10-26 00:56:29.756540364Z" level=info msg="Running pod sandbox: default/nginx/POD" id=cff1edaa-7b64-4ff5-8a3c-4d374673741e name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 00:56:29 addons-211632 crio[949]: time="2023-10-26 00:56:29.756608017Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 26 00:56:29 addons-211632 crio[949]: time="2023-10-26 00:56:29.779625814Z" level=info msg="Got pod network &{Name:nginx Namespace:default ID:cddc7abfc7bdd956849e0a40e286bb670261dece6dcbd75fab777f8d54fceecb UID:423e7dea-1b3a-4901-936b-1665d482b775 NetNS:/var/run/netns/617cf6c5-53d9-4852-9c30-b640e553430b Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 26 00:56:29 addons-211632 crio[949]: time="2023-10-26 00:56:29.779671553Z" level=info msg="Adding pod default_nginx to CNI network \"kindnet\" (type=ptp)"
	Oct 26 00:56:29 addons-211632 crio[949]: time="2023-10-26 00:56:29.788561190Z" level=info msg="Got pod network &{Name:nginx Namespace:default ID:cddc7abfc7bdd956849e0a40e286bb670261dece6dcbd75fab777f8d54fceecb UID:423e7dea-1b3a-4901-936b-1665d482b775 NetNS:/var/run/netns/617cf6c5-53d9-4852-9c30-b640e553430b Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 26 00:56:29 addons-211632 crio[949]: time="2023-10-26 00:56:29.788682529Z" level=info msg="Checking pod default_nginx for CNI network kindnet (type=ptp)"
	Oct 26 00:56:29 addons-211632 crio[949]: time="2023-10-26 00:56:29.808056818Z" level=info msg="Ran pod sandbox cddc7abfc7bdd956849e0a40e286bb670261dece6dcbd75fab777f8d54fceecb with infra container: default/nginx/POD" id=cff1edaa-7b64-4ff5-8a3c-4d374673741e name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 00:56:29 addons-211632 crio[949]: time="2023-10-26 00:56:29.809113991Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=07551a1d-343c-4f75-93b0-7de070e5cb36 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 00:56:29 addons-211632 crio[949]: time="2023-10-26 00:56:29.809299411Z" level=info msg="Image docker.io/nginx:alpine not found" id=07551a1d-343c-4f75-93b0-7de070e5cb36 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 00:56:29 addons-211632 crio[949]: time="2023-10-26 00:56:29.810044517Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=6b85d88f-4ea0-4110-a7f8-9352804f08d5 name=/runtime.v1.ImageService/PullImage
	Oct 26 00:56:29 addons-211632 crio[949]: time="2023-10-26 00:56:29.827013288Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Oct 26 00:56:30 addons-211632 crio[949]: time="2023-10-26 00:56:30.381306920Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Oct 26 00:56:30 addons-211632 crio[949]: time="2023-10-26 00:56:30.647531397Z" level=info msg="Stopped container ca2d34327e3029ca56f3475279a47a4682826986be5d1dfb3ce769696e739ed6: kube-system/metrics-server-7c66d45ddc-8pc98/metrics-server" id=5fba825e-a6db-4d33-8f44-ed3b62c90ae6 name=/runtime.v1.RuntimeService/StopContainer
	Oct 26 00:56:30 addons-211632 crio[949]: time="2023-10-26 00:56:30.648052727Z" level=info msg="Stopping pod sandbox: 7bd0a3d4a2a54ae538f02aba24af6bbd2a1ffde9c2737b49b4b1ecf807ed6c93" id=ebce4f9a-0f0f-4634-bad6-a69a40ab9cb9 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 26 00:56:30 addons-211632 crio[949]: time="2023-10-26 00:56:30.648276695Z" level=info msg="Got pod network &{Name:metrics-server-7c66d45ddc-8pc98 Namespace:kube-system ID:7bd0a3d4a2a54ae538f02aba24af6bbd2a1ffde9c2737b49b4b1ecf807ed6c93 UID:40138e51-703f-4aa0-b5ec-5392438b711d NetNS:/var/run/netns/ee8fa8a2-08c1-41eb-ba35-8e65dcf0d25d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 26 00:56:30 addons-211632 crio[949]: time="2023-10-26 00:56:30.648396212Z" level=info msg="Deleting pod kube-system_metrics-server-7c66d45ddc-8pc98 from CNI network \"kindnet\" (type=ptp)"
	Oct 26 00:56:30 addons-211632 crio[949]: time="2023-10-26 00:56:30.674946732Z" level=info msg="Stopped pod sandbox: 7bd0a3d4a2a54ae538f02aba24af6bbd2a1ffde9c2737b49b4b1ecf807ed6c93" id=ebce4f9a-0f0f-4634-bad6-a69a40ab9cb9 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 26 00:56:31 addons-211632 crio[949]: time="2023-10-26 00:56:31.062144183Z" level=info msg="Removing container: ca2d34327e3029ca56f3475279a47a4682826986be5d1dfb3ce769696e739ed6" id=ae3fc0fd-5c3c-4609-bbff-3fb555d098cc name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 00:56:31 addons-211632 crio[949]: time="2023-10-26 00:56:31.078769895Z" level=info msg="Removed container ca2d34327e3029ca56f3475279a47a4682826986be5d1dfb3ce769696e739ed6: kube-system/metrics-server-7c66d45ddc-8pc98/metrics-server" id=ae3fc0fd-5c3c-4609-bbff-3fb555d098cc name=/runtime.v1.RuntimeService/RemoveContainer
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	ff89691910760       ghcr.io/headlamp-k8s/headlamp@sha256:0fff6ba0a2a449e3948274f09640fd1f917b038a1100e6fe78ce401be75584c4                                        10 seconds ago       Running             headlamp                                 0                   53c88b32f3e8f       headlamp-94b766c-l89p5
	f8b4a1620ef2d       a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824                                                                             12 seconds ago       Exited              helper-pod                               0                   dcbb4a34ec6e4       helper-pod-delete-pvc-12cb842a-8d18-426c-8f30-ad9da7858417
	e27a4a51e5a11       docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee                                            14 seconds ago       Exited              busybox                                  0                   3ef8e288118de       test-local-path
	10012fd4061f3       docker.io/alpine/helm@sha256:9d9fab00e0680f1328924429925595dfe96a68531c8a9c1518d05ee2ad45c36f                                                16 seconds ago       Exited              helm-test                                0                   ecd14190fb9ad       helm-test
	d7ca7ba6a2b0b       docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee                                            20 seconds ago       Exited              helper-pod                               0                   10d2dc28c79b4       helper-pod-create-pvc-12cb842a-8d18-426c-8f30-ad9da7858417
	d7d8e65870cd6       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          24 seconds ago       Running             csi-snapshotter                          0                   9c67bbe34f26d       csi-hostpathplugin-n8dsf
	f609fce9db5b5       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          25 seconds ago       Running             csi-provisioner                          0                   9c67bbe34f26d       csi-hostpathplugin-n8dsf
	0bda4f6f03e2d       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            27 seconds ago       Running             liveness-probe                           0                   9c67bbe34f26d       csi-hostpathplugin-n8dsf
	f2bf94ba5fc7a       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           28 seconds ago       Running             hostpath                                 0                   9c67bbe34f26d       csi-hostpathplugin-n8dsf
	13e3200c626d6       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                29 seconds ago       Running             node-driver-registrar                    0                   9c67bbe34f26d       csi-hostpathplugin-n8dsf
	aa2cbceabb7ce       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                                 30 seconds ago       Running             gcp-auth                                 0                   36a1c0cdb3e72       gcp-auth-d4c87556c-kdp8b
	c6bc9dc3e219a       registry.k8s.io/ingress-nginx/controller@sha256:2648554ee53ec65a6095e00a53c89efae60aa21086733cdf56ae05e8f8546788                             32 seconds ago       Running             controller                               0                   d9bc4d9e7295f       ingress-nginx-controller-6f48fc54bd-cmvhx
	06856f54c677c       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   38 seconds ago       Running             csi-external-health-monitor-controller   0                   9c67bbe34f26d       csi-hostpathplugin-n8dsf
	7f59d579dfbbb       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              46 seconds ago       Running             csi-resizer                              0                   d7a7d40469ab6       csi-hostpath-resizer-0
	ebd24d2a6abd5       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f                             47 seconds ago       Running             minikube-ingress-dns                     0                   7dfb36d6addcf       kube-ingress-dns-minikube
	8c23e790af22a       1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb                                                                             54 seconds ago       Exited              patch                                    2                   b513ae501c505       gcp-auth-certs-patch-2b46c
	5e50da28f20f4       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      55 seconds ago       Running             volume-snapshot-controller               0                   06b38af9a0d80       snapshot-controller-58dbcc7b99-5jf5l
	4b0c81db6d156       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385                   55 seconds ago       Exited              create                                   0                   625323f67c929       gcp-auth-certs-create-hmhh2
	2ab19f865bd4d       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             58 seconds ago       Running             local-path-provisioner                   0                   e6b41aa780033       local-path-provisioner-78b46b4d5c-lgg9l
	744d736896d35       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385                   About a minute ago   Exited              patch                                    0                   0d22131f3fa7b       ingress-nginx-admission-patch-b2ksj
	2ab20413c9c23       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      About a minute ago   Running             volume-snapshot-controller               0                   d62b7dec09f1e       snapshot-controller-58dbcc7b99-jz6r4
	14f64f45f19a0       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             About a minute ago   Running             csi-attacher                             0                   6bd3f88e598f1       csi-hostpath-attacher-0
	d578e99c3ac89       gcr.io/cloud-spanner-emulator/emulator@sha256:07e8839e7fa1851ac9113295bc6534ead5c151f68bf7d47bd7e00af0c5948931                               About a minute ago   Running             cloud-spanner-emulator                   0                   9b0dbfa67621e       cloud-spanner-emulator-56665cdfc-qtjfd
	0652da13c33d3       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385                   About a minute ago   Exited              create                                   0                   41a359a517a91       ingress-nginx-admission-create-j8kjk
	ec3c9ea204187       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                                             About a minute ago   Running             coredns                                  0                   6c07f9edd294e       coredns-5dd5756b68-htzfl
	f8417ef628e7e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             About a minute ago   Running             storage-provisioner                      0                   adb4d2166d394       storage-provisioner
	dd7ffca93132d       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:7b77d890d8e78c9e17981524c724331cc3547eab77adf32f4222c98167c7fd21                            About a minute ago   Running             gadget                                   0                   de47483b8f1d6       gadget-mlbw5
	7b07cef64940d       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf                                                                             About a minute ago   Running             kube-proxy                               0                   de6ea2d92f16d       kube-proxy-5xv7d
	77f5574833c79       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                                                             About a minute ago   Running             kindnet-cni                              0                   bad9c2696b3f0       kindnet-x4r64
	943e6b682f7fd       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4                                                                             2 minutes ago        Running             kube-scheduler                           0                   1ca61355151c1       kube-scheduler-addons-211632
	ce5348c34feb4       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3                                                                             2 minutes ago        Running             kube-controller-manager                  0                   ed38d93cd2c4c       kube-controller-manager-addons-211632
	9dd08b1c15310       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                                             2 minutes ago        Running             etcd                                     0                   79a44e2124ad2       etcd-addons-211632
	4a7726545672e       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076                                                                             2 minutes ago        Running             kube-apiserver                           0                   d1f16b8217f9d       kube-apiserver-addons-211632
	
	* 
	* ==> coredns [ec3c9ea2041870efb48a938e4af2e89dd331147c012fa336b9d224dc6a6828b8] <==
	* [INFO] 10.244.0.13:42205 - 63673 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000085703s
	[INFO] 10.244.0.13:54583 - 22076 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.004629801s
	[INFO] 10.244.0.13:54583 - 32314 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.006493897s
	[INFO] 10.244.0.13:50787 - 2257 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004906927s
	[INFO] 10.244.0.13:50787 - 44758 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.007034555s
	[INFO] 10.244.0.13:43236 - 19161 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.005233139s
	[INFO] 10.244.0.13:43236 - 14052 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.006922037s
	[INFO] 10.244.0.13:37775 - 51588 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000113565s
	[INFO] 10.244.0.13:37775 - 42904 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000137415s
	[INFO] 10.244.0.20:58541 - 1726 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000156687s
	[INFO] 10.244.0.20:38933 - 50255 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000158372s
	[INFO] 10.244.0.20:37661 - 19246 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000096492s
	[INFO] 10.244.0.20:48536 - 52729 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000055415s
	[INFO] 10.244.0.20:58069 - 35118 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000144905s
	[INFO] 10.244.0.20:39768 - 47833 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00019479s
	[INFO] 10.244.0.20:46518 - 59515 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.008099971s
	[INFO] 10.244.0.20:56334 - 31948 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.008617315s
	[INFO] 10.244.0.20:48489 - 49030 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.007890011s
	[INFO] 10.244.0.20:33050 - 19858 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.008717875s
	[INFO] 10.244.0.20:54689 - 65002 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00574047s
	[INFO] 10.244.0.20:37119 - 49159 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006584873s
	[INFO] 10.244.0.20:37407 - 28665 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000780807s
	[INFO] 10.244.0.20:41293 - 34935 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001044211s
	[INFO] 10.244.0.25:60003 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000188839s
	[INFO] 10.244.0.25:58065 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000206742s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-211632
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-211632
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=af1d352f1030f8f3ea7f97e311e7fe82ef319942
	                    minikube.k8s.io/name=addons-211632
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_26T00_54_34_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-211632
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-211632"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 26 Oct 2023 00:54:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-211632
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 26 Oct 2023 00:56:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 26 Oct 2023 00:56:05 +0000   Thu, 26 Oct 2023 00:54:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 26 Oct 2023 00:56:05 +0000   Thu, 26 Oct 2023 00:54:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 26 Oct 2023 00:56:05 +0000   Thu, 26 Oct 2023 00:54:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 26 Oct 2023 00:56:05 +0000   Thu, 26 Oct 2023 00:55:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-211632
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	System Info:
	  Machine ID:                 e8e920dba5e44f52a984b4c201bc4d03
	  System UUID:                526ff319-72ea-4404-bea3-b50b59b7015d
	  Boot ID:                    37a42525-bdda-4c41-ac15-6bc286a851a0
	  Kernel Version:             5.15.0-1045-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (21 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-56665cdfc-qtjfd       0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s
	  gadget                      gadget-mlbw5                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  gcp-auth                    gcp-auth-d4c87556c-kdp8b                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  headlamp                    headlamp-94b766c-l89p5                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         18s
	  ingress-nginx               ingress-nginx-controller-6f48fc54bd-cmvhx    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         101s
	  kube-system                 coredns-5dd5756b68-htzfl                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     107s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 csi-hostpathplugin-n8dsf                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 etcd-addons-211632                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m
	  kube-system                 kindnet-x4r64                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-addons-211632                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-controller-manager-addons-211632        200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-proxy-5xv7d                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-addons-211632                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 snapshot-controller-58dbcc7b99-5jf5l         0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 snapshot-controller-58dbcc7b99-jz6r4         0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  local-path-storage          local-path-provisioner-78b46b4d5c-lgg9l      0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             310Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 102s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  2m6s (x8 over 2m6s)  kubelet          Node addons-211632 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m6s (x8 over 2m6s)  kubelet          Node addons-211632 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m6s (x8 over 2m6s)  kubelet          Node addons-211632 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m                   kubelet          Node addons-211632 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m                   kubelet          Node addons-211632 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m                   kubelet          Node addons-211632 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           108s                 node-controller  Node addons-211632 event: Registered Node addons-211632 in Controller
	  Normal  NodeReady                73s                  kubelet          Node addons-211632 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [Oct26 00:17]  #2
	[  +0.002625]  #3
	[  +0.002068]  #4
	[  +0.002928] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.002834] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.003988] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.003065]  #5
	[  +0.002076]  #6
	[  +0.002976]  #7
	[  +0.087157] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.807318] i8042: Warning: Keylock active
	[  +0.015363] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.007799] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.001950] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.001726] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.001494] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.010141] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.002130] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.001430] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.001775] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.001280] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +4.560947] kauditd_printk_skb: 32 callbacks suppressed
	
	* 
	* ==> etcd [9dd08b1c15310e3e02aa83c8c9360860fc2c01140355df46644f94326bfa6a96] <==
	* {"level":"info","ts":"2023-10-26T00:54:29.007357Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-26T00:54:29.007776Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-26T00:54:29.007918Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-26T00:54:29.007955Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-26T00:54:29.008309Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-10-26T00:54:29.00838Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2023-10-26T00:54:49.293701Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.882284ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128024712627227571 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/expand-controller\" mod_revision:321 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/expand-controller\" value_size:134 >> failure:<request_range:<key:\"/registry/serviceaccounts/kube-system/expand-controller\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-10-26T00:54:49.294569Z","caller":"traceutil/trace.go:171","msg":"trace[891182157] transaction","detail":"{read_only:false; response_revision:419; number_of_response:1; }","duration":"191.240282ms","start":"2023-10-26T00:54:49.10331Z","end":"2023-10-26T00:54:49.29455Z","steps":["trace[891182157] 'process raft request'  (duration: 86.613534ms)","trace[891182157] 'compare'  (duration: 102.642788ms)"],"step_count":2}
	{"level":"info","ts":"2023-10-26T00:54:49.613293Z","caller":"traceutil/trace.go:171","msg":"trace[1254569783] transaction","detail":"{read_only:false; response_revision:421; number_of_response:1; }","duration":"207.15739ms","start":"2023-10-26T00:54:49.406114Z","end":"2023-10-26T00:54:49.613271Z","steps":["trace[1254569783] 'process raft request'  (duration: 186.449248ms)","trace[1254569783] 'compare'  (duration: 20.281799ms)"],"step_count":2}
	{"level":"warn","ts":"2023-10-26T00:54:50.294044Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"193.383136ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128024712627227585 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" mod_revision:421 > success:<request_put:<key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" value_size:3174 >> failure:<request_range:<key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-10-26T00:54:50.302429Z","caller":"traceutil/trace.go:171","msg":"trace[1076541632] transaction","detail":"{read_only:false; response_revision:426; number_of_response:1; }","duration":"295.158107ms","start":"2023-10-26T00:54:50.007243Z","end":"2023-10-26T00:54:50.302401Z","steps":["trace[1076541632] 'process raft request'  (duration: 93.334204ms)","trace[1076541632] 'compare'  (duration: 193.05003ms)"],"step_count":2}
	{"level":"info","ts":"2023-10-26T00:54:50.308648Z","caller":"traceutil/trace.go:171","msg":"trace[176108853] transaction","detail":"{read_only:false; response_revision:428; number_of_response:1; }","duration":"200.295573ms","start":"2023-10-26T00:54:50.1083Z","end":"2023-10-26T00:54:50.308596Z","steps":["trace[176108853] 'process raft request'  (duration: 186.216498ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-26T00:54:50.30882Z","caller":"traceutil/trace.go:171","msg":"trace[1239667156] transaction","detail":"{read_only:false; response_revision:427; number_of_response:1; }","duration":"300.138595ms","start":"2023-10-26T00:54:50.008658Z","end":"2023-10-26T00:54:50.308797Z","steps":["trace[1239667156] 'process raft request'  (duration: 285.800305ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-26T00:54:50.39364Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-26T00:54:50.008641Z","time spent":"381.419084ms","remote":"127.0.0.1:45066","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":605,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/kindnet\" mod_revision:284 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/kindnet\" value_size:552 >> failure:<request_range:<key:\"/registry/serviceaccounts/kube-system/kindnet\" > >"}
	{"level":"warn","ts":"2023-10-26T00:54:51.410206Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.508205ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-211632\" ","response":"range_response_count:1 size:5654"}
	{"level":"info","ts":"2023-10-26T00:54:51.410278Z","caller":"traceutil/trace.go:171","msg":"trace[1190423287] range","detail":"{range_begin:/registry/minions/addons-211632; range_end:; response_count:1; response_revision:496; }","duration":"102.5983ms","start":"2023-10-26T00:54:51.307666Z","end":"2023-10-26T00:54:51.410265Z","steps":["trace[1190423287] 'range keys from in-memory index tree'  (duration: 102.427492ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-26T00:54:51.410669Z","caller":"traceutil/trace.go:171","msg":"trace[862519116] transaction","detail":"{read_only:false; response_revision:497; number_of_response:1; }","duration":"102.864159ms","start":"2023-10-26T00:54:51.307795Z","end":"2023-10-26T00:54:51.410659Z","steps":["trace[862519116] 'compare'  (duration: 97.661345ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-26T00:54:51.493793Z","caller":"traceutil/trace.go:171","msg":"trace[803885448] transaction","detail":"{read_only:false; response_revision:499; number_of_response:1; }","duration":"103.550887ms","start":"2023-10-26T00:54:51.390225Z","end":"2023-10-26T00:54:51.493775Z","steps":["trace[803885448] 'process raft request'  (duration: 103.044817ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-26T00:54:51.494043Z","caller":"traceutil/trace.go:171","msg":"trace[410789711] transaction","detail":"{read_only:false; response_revision:500; number_of_response:1; }","duration":"103.671161ms","start":"2023-10-26T00:54:51.390354Z","end":"2023-10-26T00:54:51.494025Z","steps":["trace[410789711] 'process raft request'  (duration: 102.996423ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-26T00:54:51.494663Z","caller":"traceutil/trace.go:171","msg":"trace[2067660786] transaction","detail":"{read_only:false; response_revision:501; number_of_response:1; }","duration":"104.186071ms","start":"2023-10-26T00:54:51.390463Z","end":"2023-10-26T00:54:51.494649Z","steps":["trace[2067660786] 'process raft request'  (duration: 102.928346ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-26T00:54:51.49467Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.574299ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/metrics-server\" ","response":"range_response_count:1 size:775"}
	{"level":"info","ts":"2023-10-26T00:54:51.494996Z","caller":"traceutil/trace.go:171","msg":"trace[1255900101] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/metrics-server; range_end:; response_count:1; response_revision:505; }","duration":"104.910171ms","start":"2023-10-26T00:54:51.390074Z","end":"2023-10-26T00:54:51.494984Z","steps":["trace[1255900101] 'agreement among raft nodes before linearized reading'  (duration: 104.526989ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-26T00:56:01.048109Z","caller":"traceutil/trace.go:171","msg":"trace[725678898] transaction","detail":"{read_only:false; response_revision:1103; number_of_response:1; }","duration":"133.406216ms","start":"2023-10-26T00:56:00.914687Z","end":"2023-10-26T00:56:01.048093Z","steps":["trace[725678898] 'process raft request'  (duration: 133.286711ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-26T00:56:01.205833Z","caller":"traceutil/trace.go:171","msg":"trace[1924785532] transaction","detail":"{read_only:false; response_revision:1106; number_of_response:1; }","duration":"153.4687ms","start":"2023-10-26T00:56:01.052341Z","end":"2023-10-26T00:56:01.205809Z","steps":["trace[1924785532] 'process raft request'  (duration: 90.352993ms)","trace[1924785532] 'compare'  (duration: 62.89638ms)"],"step_count":2}
	{"level":"info","ts":"2023-10-26T00:56:01.205844Z","caller":"traceutil/trace.go:171","msg":"trace[1607229179] transaction","detail":"{read_only:false; response_revision:1107; number_of_response:1; }","duration":"153.434448ms","start":"2023-10-26T00:56:01.052399Z","end":"2023-10-26T00:56:01.205833Z","steps":["trace[1607229179] 'process raft request'  (duration: 153.326812ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [aa2cbceabb7ceee001867afe8f27fa7f0add28f9d24c7f21114c0ecadf512cb8] <==
	* 2023/10/26 00:56:02 GCP Auth Webhook started!
	2023/10/26 00:56:09 Ready to marshal response ...
	2023/10/26 00:56:09 Ready to write response ...
	2023/10/26 00:56:09 Ready to marshal response ...
	2023/10/26 00:56:09 Ready to write response ...
	2023/10/26 00:56:14 Ready to marshal response ...
	2023/10/26 00:56:14 Ready to write response ...
	2023/10/26 00:56:15 Ready to marshal response ...
	2023/10/26 00:56:15 Ready to write response ...
	2023/10/26 00:56:15 Ready to marshal response ...
	2023/10/26 00:56:15 Ready to write response ...
	2023/10/26 00:56:15 Ready to marshal response ...
	2023/10/26 00:56:15 Ready to write response ...
	2023/10/26 00:56:19 Ready to marshal response ...
	2023/10/26 00:56:19 Ready to write response ...
	2023/10/26 00:56:20 Ready to marshal response ...
	2023/10/26 00:56:20 Ready to write response ...
	2023/10/26 00:56:29 Ready to marshal response ...
	2023/10/26 00:56:29 Ready to write response ...
	2023/10/26 00:56:33 Ready to marshal response ...
	2023/10/26 00:56:33 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  00:56:33 up 38 min,  0 users,  load average: 2.01, 0.93, 0.37
	Linux addons-211632 5.15.0-1045-gcp #53~20.04.2-Ubuntu SMP Wed Oct 18 12:59:20 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [77f5574833c79f17c76feaf56b853a4342b58c235340ef89f64bba26a7d6d870] <==
	* I1026 00:54:47.806844       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1026 00:54:47.892157       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1026 00:54:47.892325       1 main.go:116] setting mtu 1500 for CNI 
	I1026 00:54:47.892352       1 main.go:146] kindnetd IP family: "ipv4"
	I1026 00:54:47.892383       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1026 00:55:20.422713       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I1026 00:55:20.430024       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1026 00:55:20.430065       1 main.go:227] handling current node
	I1026 00:55:30.493581       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1026 00:55:30.493609       1 main.go:227] handling current node
	I1026 00:55:40.505133       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1026 00:55:40.505157       1 main.go:227] handling current node
	I1026 00:55:50.517005       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1026 00:55:50.517041       1 main.go:227] handling current node
	I1026 00:56:00.528877       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1026 00:56:00.528904       1 main.go:227] handling current node
	I1026 00:56:10.533111       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1026 00:56:10.533135       1 main.go:227] handling current node
	I1026 00:56:20.594238       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1026 00:56:20.594608       1 main.go:227] handling current node
	I1026 00:56:30.606611       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1026 00:56:30.606635       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [4a7726545672e0e1bce296d535baa0dbb287da750c1fecfc4e980fb47db3b6b3] <==
	* , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1026 00:54:53.412001       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs={"IPv4":"10.101.200.168"}
	I1026 00:54:53.499953       1 controller.go:624] quota admission added evaluator for: statefulsets.apps
	I1026 00:54:53.602006       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.101.213.176"}
	W1026 00:54:54.306844       1 aggregator.go:166] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1026 00:54:55.421723       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.102.24.227"}
	W1026 00:55:20.855541       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.24.227:443: connect: connection refused
	E1026 00:55:20.855569       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.24.227:443: connect: connection refused
	W1026 00:55:20.855826       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.24.227:443: connect: connection refused
	E1026 00:55:20.855851       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.24.227:443: connect: connection refused
	W1026 00:55:20.899280       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.24.227:443: connect: connection refused
	E1026 00:55:20.899323       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.24.227:443: connect: connection refused
	E1026 00:55:28.669150       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.229.18:443/apis/metrics.k8s.io/v1beta1: Get "https://10.99.229.18:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.99.229.18:443: connect: connection refused
	W1026 00:55:28.669164       1 handler_proxy.go:93] no RequestInfo found in the context
	E1026 00:55:28.669226       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1026 00:55:28.681082       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1026 00:55:28.695675       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1026 00:55:30.346725       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1026 00:56:15.532932       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.109.205.169"}
	E1026 00:56:16.731158       1 upgradeaware.go:425] Error proxying data from client to backend: read tcp 192.168.49.2:8443->10.244.0.22:43364: read: connection reset by peer
	I1026 00:56:29.209282       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I1026 00:56:29.510021       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.106.0.125"}
	I1026 00:56:29.682598       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	
	* 
	* ==> kube-controller-manager [ce5348c34feb4c043c46d6ec80097165c6c347bb62987af2221ed402e15afec5] <==
	* I1026 00:56:09.013730       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I1026 00:56:09.040665       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I1026 00:56:09.786510       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
	I1026 00:56:09.973940       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1026 00:56:09.973981       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1026 00:56:10.007937       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I1026 00:56:10.028864       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I1026 00:56:14.815258       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-6f48fc54bd" duration="10.450968ms"
	I1026 00:56:14.815400       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-6f48fc54bd" duration="85.555µs"
	I1026 00:56:15.548315       1 event.go:307] "Event occurred" object="headlamp/headlamp" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set headlamp-94b766c to 1"
	I1026 00:56:15.592183       1 event.go:307] "Event occurred" object="headlamp/headlamp-94b766c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: headlamp-94b766c-l89p5"
	I1026 00:56:15.602287       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-94b766c" duration="54.748309ms"
	I1026 00:56:15.607478       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-94b766c" duration="5.139238ms"
	I1026 00:56:15.607657       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-94b766c" duration="67.352µs"
	I1026 00:56:15.611985       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-94b766c" duration="39.844µs"
	I1026 00:56:18.517861       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/tiller-deploy-7b677967b9" duration="10.332µs"
	I1026 00:56:21.219342       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-78b46b4d5c" duration="7.052µs"
	I1026 00:56:23.075941       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-94b766c" duration="100.129µs"
	I1026 00:56:23.292845       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-94b766c" duration="43.801146ms"
	I1026 00:56:23.292987       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-94b766c" duration="84.659µs"
	I1026 00:56:25.742561       1 replica_set.go:676] "Finished syncing" kind="ReplicationController" key="kube-system/registry" duration="13.914µs"
	I1026 00:56:29.442893       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-7c66d45ddc" duration="9.487µs"
	I1026 00:56:30.153262       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1026 00:56:30.332041       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1026 00:56:33.094857       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	
	* 
	* ==> kube-proxy [7b07cef64940d835fcfa592d994904e5ac74d1f697e0c012fbee3686ca594dc7] <==
	* I1026 00:54:48.512806       1 server_others.go:69] "Using iptables proxy"
	I1026 00:54:48.712479       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1026 00:54:50.810582       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 00:54:50.895332       1 server_others.go:152] "Using iptables Proxier"
	I1026 00:54:50.895455       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1026 00:54:50.895513       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1026 00:54:50.895581       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1026 00:54:50.895829       1 server.go:846] "Version info" version="v1.28.3"
	I1026 00:54:50.896070       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 00:54:50.896917       1 config.go:188] "Starting service config controller"
	I1026 00:54:50.897013       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1026 00:54:50.897081       1 config.go:97] "Starting endpoint slice config controller"
	I1026 00:54:50.897126       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1026 00:54:50.897729       1 config.go:315] "Starting node config controller"
	I1026 00:54:50.897798       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1026 00:54:50.997389       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1026 00:54:50.997452       1 shared_informer.go:318] Caches are synced for service config
	I1026 00:54:50.997859       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [943e6b682f7fd08720ef8644e4e1786ce3ebc489950b927699e9a294a1634fd4] <==
	* W1026 00:54:30.509691       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1026 00:54:30.510202       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1026 00:54:30.509764       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1026 00:54:30.510217       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1026 00:54:30.509822       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1026 00:54:30.510232       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1026 00:54:30.509887       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1026 00:54:30.510248       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1026 00:54:30.510328       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1026 00:54:30.510341       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1026 00:54:30.510478       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1026 00:54:30.510520       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1026 00:54:31.324635       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1026 00:54:31.324669       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1026 00:54:31.368981       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1026 00:54:31.369013       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1026 00:54:31.476556       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1026 00:54:31.476599       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1026 00:54:31.507261       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1026 00:54:31.507302       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1026 00:54:31.514477       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1026 00:54:31.514510       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1026 00:54:31.526718       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1026 00:54:31.526749       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I1026 00:54:31.902593       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Oct 26 00:56:30 addons-211632 kubelet[1560]: I1026 00:56:30.799668    1560 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40138e51-703f-4aa0-b5ec-5392438b711d-kube-api-access-lbxtk" (OuterVolumeSpecName: "kube-api-access-lbxtk") pod "40138e51-703f-4aa0-b5ec-5392438b711d" (UID: "40138e51-703f-4aa0-b5ec-5392438b711d"). InnerVolumeSpecName "kube-api-access-lbxtk". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 26 00:56:30 addons-211632 kubelet[1560]: I1026 00:56:30.899020    1560 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-lbxtk\" (UniqueName: \"kubernetes.io/projected/40138e51-703f-4aa0-b5ec-5392438b711d-kube-api-access-lbxtk\") on node \"addons-211632\" DevicePath \"\""
	Oct 26 00:56:30 addons-211632 kubelet[1560]: I1026 00:56:30.899064    1560 reconciler_common.go:300] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/40138e51-703f-4aa0-b5ec-5392438b711d-tmp-dir\") on node \"addons-211632\" DevicePath \"\""
	Oct 26 00:56:31 addons-211632 kubelet[1560]: I1026 00:56:31.061142    1560 scope.go:117] "RemoveContainer" containerID="ca2d34327e3029ca56f3475279a47a4682826986be5d1dfb3ce769696e739ed6"
	Oct 26 00:56:31 addons-211632 kubelet[1560]: I1026 00:56:31.079026    1560 scope.go:117] "RemoveContainer" containerID="ca2d34327e3029ca56f3475279a47a4682826986be5d1dfb3ce769696e739ed6"
	Oct 26 00:56:31 addons-211632 kubelet[1560]: E1026 00:56:31.079449    1560 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca2d34327e3029ca56f3475279a47a4682826986be5d1dfb3ce769696e739ed6\": container with ID starting with ca2d34327e3029ca56f3475279a47a4682826986be5d1dfb3ce769696e739ed6 not found: ID does not exist" containerID="ca2d34327e3029ca56f3475279a47a4682826986be5d1dfb3ce769696e739ed6"
	Oct 26 00:56:31 addons-211632 kubelet[1560]: I1026 00:56:31.079500    1560 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca2d34327e3029ca56f3475279a47a4682826986be5d1dfb3ce769696e739ed6"} err="failed to get container status \"ca2d34327e3029ca56f3475279a47a4682826986be5d1dfb3ce769696e739ed6\": rpc error: code = NotFound desc = could not find container \"ca2d34327e3029ca56f3475279a47a4682826986be5d1dfb3ce769696e739ed6\": container with ID starting with ca2d34327e3029ca56f3475279a47a4682826986be5d1dfb3ce769696e739ed6 not found: ID does not exist"
	Oct 26 00:56:31 addons-211632 kubelet[1560]: I1026 00:56:31.309941    1560 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="40138e51-703f-4aa0-b5ec-5392438b711d" path="/var/lib/kubelet/pods/40138e51-703f-4aa0-b5ec-5392438b711d/volumes"
	Oct 26 00:56:33 addons-211632 kubelet[1560]: I1026 00:56:33.302423    1560 scope.go:117] "RemoveContainer" containerID="10012fd4061f3236f49538f1dc00d6486dd341a20b5c6e3e596fb5b36c7f5026"
	Oct 26 00:56:33 addons-211632 kubelet[1560]: I1026 00:56:33.426436    1560 scope.go:117] "RemoveContainer" containerID="d7ca7ba6a2b0bb0f1d61b1e739d5bf6c9ba198210d930a2bd2cde30cdac2543f"
	Oct 26 00:56:33 addons-211632 kubelet[1560]: E1026 00:56:33.445986    1560 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/a20aaccf17bbbb89f24b27f29dcaf4c2f1d882d61a0a3dbe35ce19b8a356533e/diff" to get inode usage: stat /var/lib/containers/storage/overlay/a20aaccf17bbbb89f24b27f29dcaf4c2f1d882d61a0a3dbe35ce19b8a356533e/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_registry-svllb_6462cf6d-b638-4950-bc58-6d40cfa1a9e9/registry/0.log" to get inode usage: stat /var/log/pods/kube-system_registry-svllb_6462cf6d-b638-4950-bc58-6d40cfa1a9e9/registry/0.log: no such file or directory
	Oct 26 00:56:33 addons-211632 kubelet[1560]: E1026 00:56:33.451358    1560 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/2207c9a10aec04b7fdc771647406ba98ae04e255c3d1743f4289b42998ddd922/diff" to get inode usage: stat /var/lib/containers/storage/overlay/2207c9a10aec04b7fdc771647406ba98ae04e255c3d1743f4289b42998ddd922/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_metrics-server-7c66d45ddc-8pc98_40138e51-703f-4aa0-b5ec-5392438b711d/metrics-server/0.log" to get inode usage: stat /var/log/pods/kube-system_metrics-server-7c66d45ddc-8pc98_40138e51-703f-4aa0-b5ec-5392438b711d/metrics-server/0.log: no such file or directory
	Oct 26 00:56:33 addons-211632 kubelet[1560]: E1026 00:56:33.454350    1560 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/871060d253c35dea64e061f47fc3b532c1a95b2ff14b326e8cdaab8d3c185ad4/diff" to get inode usage: stat /var/lib/containers/storage/overlay/871060d253c35dea64e061f47fc3b532c1a95b2ff14b326e8cdaab8d3c185ad4/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/gcp-auth_gcp-auth-certs-patch-2b46c_be33d572-43f0-4e7a-b236-e6fe9722445a/patch/1.log" to get inode usage: stat /var/log/pods/gcp-auth_gcp-auth-certs-patch-2b46c_be33d572-43f0-4e7a-b236-e6fe9722445a/patch/1.log: no such file or directory
	Oct 26 00:56:33 addons-211632 kubelet[1560]: I1026 00:56:33.490244    1560 scope.go:117] "RemoveContainer" containerID="8c23e790af22a1def33a265bd1c92f70e7fc3317fda7d018be3c07a52826fecf"
	Oct 26 00:56:33 addons-211632 kubelet[1560]: E1026 00:56:33.512831    1560 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: <nil>, extraDiskErr: could not stat "/var/log/pods/gcp-auth_gcp-auth-certs-patch-2b46c_be33d572-43f0-4e7a-b236-e6fe9722445a/patch/2.log" to get inode usage: stat /var/log/pods/gcp-auth_gcp-auth-certs-patch-2b46c_be33d572-43f0-4e7a-b236-e6fe9722445a/patch/2.log: no such file or directory
	Oct 26 00:56:33 addons-211632 kubelet[1560]: I1026 00:56:33.522559    1560 scope.go:117] "RemoveContainer" containerID="4b0c81db6d15632d7ab762670363a8b95e5d8dd311ab9631dfd3bf7a514ca19a"
	Oct 26 00:56:33 addons-211632 kubelet[1560]: I1026 00:56:33.613384    1560 scope.go:117] "RemoveContainer" containerID="f8b4a1620ef2d76a2c43d223e10b4c5d91cff342a36ab833512415f2009d45ef"
	Oct 26 00:56:33 addons-211632 kubelet[1560]: I1026 00:56:33.701872    1560 scope.go:117] "RemoveContainer" containerID="e27a4a51e5a11cfe713dfda5f1dc43aa505dc9653e4758c982cdfd4f931b29e9"
	Oct 26 00:56:33 addons-211632 kubelet[1560]: I1026 00:56:33.704669    1560 topology_manager.go:215] "Topology Admit Handler" podUID="e421f409-489a-4f72-b0bb-cda0c60c345f" podNamespace="default" podName="task-pv-pod"
	Oct 26 00:56:33 addons-211632 kubelet[1560]: E1026 00:56:33.704791    1560 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="40138e51-703f-4aa0-b5ec-5392438b711d" containerName="metrics-server"
	Oct 26 00:56:33 addons-211632 kubelet[1560]: I1026 00:56:33.704844    1560 memory_manager.go:346] "RemoveStaleState removing state" podUID="40138e51-703f-4aa0-b5ec-5392438b711d" containerName="metrics-server"
	Oct 26 00:56:33 addons-211632 kubelet[1560]: I1026 00:56:33.893996    1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/e421f409-489a-4f72-b0bb-cda0c60c345f-gcp-creds\") pod \"task-pv-pod\" (UID: \"e421f409-489a-4f72-b0bb-cda0c60c345f\") " pod="default/task-pv-pod"
	Oct 26 00:56:33 addons-211632 kubelet[1560]: I1026 00:56:33.894057    1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6r5b\" (UniqueName: \"kubernetes.io/projected/e421f409-489a-4f72-b0bb-cda0c60c345f-kube-api-access-b6r5b\") pod \"task-pv-pod\" (UID: \"e421f409-489a-4f72-b0bb-cda0c60c345f\") " pod="default/task-pv-pod"
	Oct 26 00:56:33 addons-211632 kubelet[1560]: I1026 00:56:33.894182    1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f2ec02ff-e4b3-4572-becf-0bed8c1048de\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^819344d2-739a-11ee-b475-9a47da7aa7bd\") pod \"task-pv-pod\" (UID: \"e421f409-489a-4f72-b0bb-cda0c60c345f\") " pod="default/task-pv-pod"
	Oct 26 00:56:34 addons-211632 kubelet[1560]: I1026 00:56:34.000004    1560 operation_generator.go:665] "MountVolume.MountDevice succeeded for volume \"pvc-f2ec02ff-e4b3-4572-becf-0bed8c1048de\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^819344d2-739a-11ee-b475-9a47da7aa7bd\") pod \"task-pv-pod\" (UID: \"e421f409-489a-4f72-b0bb-cda0c60c345f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/hostpath.csi.k8s.io/f36db663b1505f2914f08e7a1f7a47d3d8f8b52adc6319eb5c65f595e8fe2306/globalmount\"" pod="default/task-pv-pod"
	
	* 
	* ==> storage-provisioner [f8417ef628e7e60f63892414fa42ad6a118481875325bbe75dcf8165c6387f45] <==
	* I1026 00:55:21.715098       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1026 00:55:21.722579       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 00:55:21.722723       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1026 00:55:21.732375       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 00:55:21.732437       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0c4e8849-6c49-4d55-8889-30991b4ff466", APIVersion:"v1", ResourceVersion:"891", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-211632_10c8f30d-333e-4a05-ac23-355f0e36840c became leader
	I1026 00:55:21.732602       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-211632_10c8f30d-333e-4a05-ac23-355f0e36840c!
	I1026 00:55:21.833886       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-211632_10c8f30d-333e-4a05-ac23-355f0e36840c!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-211632 -n addons-211632
helpers_test.go:261: (dbg) Run:  kubectl --context addons-211632 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: task-pv-pod ingress-nginx-admission-create-j8kjk ingress-nginx-admission-patch-b2ksj
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/InspektorGadget]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-211632 describe pod task-pv-pod ingress-nginx-admission-create-j8kjk ingress-nginx-admission-patch-b2ksj
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-211632 describe pod task-pv-pod ingress-nginx-admission-create-j8kjk ingress-nginx-admission-patch-b2ksj: exit status 1 (68.84422ms)

-- stdout --
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-211632/192.168.49.2
	Start Time:       Thu, 26 Oct 2023 00:56:33 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-b6r5b (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-b6r5b:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  1s    default-scheduler  Successfully assigned default/task-pv-pod to addons-211632
	  Normal  Pulling    0s    kubelet            Pulling image "docker.io/nginx"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-j8kjk" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-b2ksj" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-211632 describe pod task-pv-pod ingress-nginx-admission-create-j8kjk ingress-nginx-admission-patch-b2ksj: exit status 1
--- FAIL: TestAddons/parallel/InspektorGadget (8.83s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (187.5s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-075799 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-075799 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (17.504966278s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-075799 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-075799 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [804f148d-aa10-45fb-8ea0-7821cb50158f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [804f148d-aa10-45fb-8ea0-7821cb50158f] Running
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 12.007625015s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-075799 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E1026 01:06:09.210022   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/client.crt: no such file or directory
E1026 01:06:36.896146   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/client.crt: no such file or directory
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-075799 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.146985548s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-075799 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-075799 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.49.2
E1026 01:07:16.539730   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/functional-052267/client.crt: no such file or directory
E1026 01:07:16.545000   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/functional-052267/client.crt: no such file or directory
E1026 01:07:16.555253   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/functional-052267/client.crt: no such file or directory
E1026 01:07:16.575559   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/functional-052267/client.crt: no such file or directory
E1026 01:07:16.615837   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/functional-052267/client.crt: no such file or directory
E1026 01:07:16.696140   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/functional-052267/client.crt: no such file or directory
E1026 01:07:16.856541   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/functional-052267/client.crt: no such file or directory
E1026 01:07:17.177116   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/functional-052267/client.crt: no such file or directory
E1026 01:07:17.818098   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/functional-052267/client.crt: no such file or directory
E1026 01:07:19.098671   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/functional-052267/client.crt: no such file or directory
addons_test.go:296: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.006446119s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:298: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:302: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-075799 addons disable ingress-dns --alsologtostderr -v=1
E1026 01:07:21.658849   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/functional-052267/client.crt: no such file or directory
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-075799 addons disable ingress-dns --alsologtostderr -v=1: (2.77893929s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-075799 addons disable ingress --alsologtostderr -v=1
E1026 01:07:26.779526   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/functional-052267/client.crt: no such file or directory
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-075799 addons disable ingress --alsologtostderr -v=1: (7.407063802s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-075799
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-075799:

-- stdout --
	[
	    {
	        "Id": "e7ff3f9af75fde04e87ce440917c5ec1ce4d561d8446d922760cfc426ef3f160",
	        "Created": "2023-10-26T01:03:16.463176858Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 58582,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-26T01:03:16.830043085Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3e615aae66792e89a7d2c001b5c02b5e78a999706d53f7c8dbfcff1520487fdd",
	        "ResolvConfPath": "/var/lib/docker/containers/e7ff3f9af75fde04e87ce440917c5ec1ce4d561d8446d922760cfc426ef3f160/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e7ff3f9af75fde04e87ce440917c5ec1ce4d561d8446d922760cfc426ef3f160/hostname",
	        "HostsPath": "/var/lib/docker/containers/e7ff3f9af75fde04e87ce440917c5ec1ce4d561d8446d922760cfc426ef3f160/hosts",
	        "LogPath": "/var/lib/docker/containers/e7ff3f9af75fde04e87ce440917c5ec1ce4d561d8446d922760cfc426ef3f160/e7ff3f9af75fde04e87ce440917c5ec1ce4d561d8446d922760cfc426ef3f160-json.log",
	        "Name": "/ingress-addon-legacy-075799",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-075799:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ingress-addon-legacy-075799",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/862c354f2c8b6f2152dae434fffb7514ee984f4c60661338dfa3e5ec23dc97ec-init/diff:/var/lib/docker/overlay2/007d7e88bd091d08c1a177e3000477192ad6785f5c636023d34df0777872a721/diff",
	                "MergedDir": "/var/lib/docker/overlay2/862c354f2c8b6f2152dae434fffb7514ee984f4c60661338dfa3e5ec23dc97ec/merged",
	                "UpperDir": "/var/lib/docker/overlay2/862c354f2c8b6f2152dae434fffb7514ee984f4c60661338dfa3e5ec23dc97ec/diff",
	                "WorkDir": "/var/lib/docker/overlay2/862c354f2c8b6f2152dae434fffb7514ee984f4c60661338dfa3e5ec23dc97ec/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-075799",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-075799/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-075799",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-075799",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-075799",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "506f0b0395e1d0d91e4d2b992b9f40b528362b7ad324c5c3a49c3beac00c6ead",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/506f0b0395e1",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-075799": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "e7ff3f9af75f",
	                        "ingress-addon-legacy-075799"
	                    ],
	                    "NetworkID": "3d94f8596a57be98f1b458a7b04d63ff07ba49452cff6234fa4ee688ddbfb519",
	                    "EndpointID": "a5cef11ec2fa6c725d970e85b0135e9b5dc40c70e90de55b9e0b712d10765e18",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-075799 -n ingress-addon-legacy-075799
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-075799 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-075799 logs -n 25: (1.068538368s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                  |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| mount          | -p functional-052267                                                   | functional-052267           | jenkins | v1.31.2 | 26 Oct 23 01:02 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup3888175978/001:/mount1 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| ssh            | functional-052267 ssh findmnt                                          | functional-052267           | jenkins | v1.31.2 | 26 Oct 23 01:02 UTC |                     |
	|                | -T /mount1                                                             |                             |         |         |                     |                     |
	| mount          | -p functional-052267                                                   | functional-052267           | jenkins | v1.31.2 | 26 Oct 23 01:02 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup3888175978/001:/mount3 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| update-context | functional-052267                                                      | functional-052267           | jenkins | v1.31.2 | 26 Oct 23 01:02 UTC | 26 Oct 23 01:02 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-052267                                                      | functional-052267           | jenkins | v1.31.2 | 26 Oct 23 01:02 UTC | 26 Oct 23 01:02 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-052267                                                      | functional-052267           | jenkins | v1.31.2 | 26 Oct 23 01:02 UTC | 26 Oct 23 01:02 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| ssh            | functional-052267 ssh findmnt                                          | functional-052267           | jenkins | v1.31.2 | 26 Oct 23 01:02 UTC | 26 Oct 23 01:02 UTC |
	|                | -T /mount1                                                             |                             |         |         |                     |                     |
	| image          | functional-052267                                                      | functional-052267           | jenkins | v1.31.2 | 26 Oct 23 01:02 UTC | 26 Oct 23 01:02 UTC |
	|                | image ls --format short                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh            | functional-052267 ssh findmnt                                          | functional-052267           | jenkins | v1.31.2 | 26 Oct 23 01:02 UTC | 26 Oct 23 01:02 UTC |
	|                | -T /mount2                                                             |                             |         |         |                     |                     |
	| image          | functional-052267                                                      | functional-052267           | jenkins | v1.31.2 | 26 Oct 23 01:02 UTC | 26 Oct 23 01:02 UTC |
	|                | image ls --format yaml                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh            | functional-052267 ssh findmnt                                          | functional-052267           | jenkins | v1.31.2 | 26 Oct 23 01:02 UTC | 26 Oct 23 01:02 UTC |
	|                | -T /mount3                                                             |                             |         |         |                     |                     |
	| ssh            | functional-052267 ssh pgrep                                            | functional-052267           | jenkins | v1.31.2 | 26 Oct 23 01:02 UTC |                     |
	|                | buildkitd                                                              |                             |         |         |                     |                     |
	| mount          | -p functional-052267                                                   | functional-052267           | jenkins | v1.31.2 | 26 Oct 23 01:02 UTC |                     |
	|                | --kill=true                                                            |                             |         |         |                     |                     |
	| image          | functional-052267 image build -t                                       | functional-052267           | jenkins | v1.31.2 | 26 Oct 23 01:02 UTC | 26 Oct 23 01:02 UTC |
	|                | localhost/my-image:functional-052267                                   |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                       |                             |         |         |                     |                     |
	| image          | functional-052267                                                      | functional-052267           | jenkins | v1.31.2 | 26 Oct 23 01:02 UTC | 26 Oct 23 01:02 UTC |
	|                | image ls --format json                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-052267                                                      | functional-052267           | jenkins | v1.31.2 | 26 Oct 23 01:02 UTC | 26 Oct 23 01:02 UTC |
	|                | image ls --format table                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-052267 image ls                                             | functional-052267           | jenkins | v1.31.2 | 26 Oct 23 01:02 UTC | 26 Oct 23 01:02 UTC |
	| delete         | -p functional-052267                                                   | functional-052267           | jenkins | v1.31.2 | 26 Oct 23 01:03 UTC | 26 Oct 23 01:03 UTC |
	| start          | -p ingress-addon-legacy-075799                                         | ingress-addon-legacy-075799 | jenkins | v1.31.2 | 26 Oct 23 01:03 UTC | 26 Oct 23 01:04 UTC |
	|                | --kubernetes-version=v1.18.20                                          |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                                                   |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-075799                                            | ingress-addon-legacy-075799 | jenkins | v1.31.2 | 26 Oct 23 01:04 UTC | 26 Oct 23 01:04 UTC |
	|                | addons enable ingress                                                  |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-075799                                            | ingress-addon-legacy-075799 | jenkins | v1.31.2 | 26 Oct 23 01:04 UTC | 26 Oct 23 01:04 UTC |
	|                | addons enable ingress-dns                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-075799                                            | ingress-addon-legacy-075799 | jenkins | v1.31.2 | 26 Oct 23 01:04 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/                                          |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                                           |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-075799 ip                                         | ingress-addon-legacy-075799 | jenkins | v1.31.2 | 26 Oct 23 01:07 UTC | 26 Oct 23 01:07 UTC |
	| addons         | ingress-addon-legacy-075799                                            | ingress-addon-legacy-075799 | jenkins | v1.31.2 | 26 Oct 23 01:07 UTC | 26 Oct 23 01:07 UTC |
	|                | addons disable ingress-dns                                             |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-075799                                            | ingress-addon-legacy-075799 | jenkins | v1.31.2 | 26 Oct 23 01:07 UTC | 26 Oct 23 01:07 UTC |
	|                | addons disable ingress                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/26 01:03:03
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 01:03:03.655517   57917 out.go:296] Setting OutFile to fd 1 ...
	I1026 01:03:03.655654   57917 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1026 01:03:03.655665   57917 out.go:309] Setting ErrFile to fd 2...
	I1026 01:03:03.655672   57917 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1026 01:03:03.655868   57917 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17491-8444/.minikube/bin
	I1026 01:03:03.656466   57917 out.go:303] Setting JSON to false
	I1026 01:03:03.657947   57917 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2729,"bootTime":1698279454,"procs":642,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 01:03:03.658013   57917 start.go:138] virtualization: kvm guest
	I1026 01:03:03.660368   57917 out.go:177] * [ingress-addon-legacy-075799] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1026 01:03:03.662712   57917 out.go:177]   - MINIKUBE_LOCATION=17491
	I1026 01:03:03.664381   57917 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 01:03:03.662731   57917 notify.go:220] Checking for updates...
	I1026 01:03:03.666259   57917 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17491-8444/kubeconfig
	I1026 01:03:03.667876   57917 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17491-8444/.minikube
	I1026 01:03:03.669413   57917 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 01:03:03.670853   57917 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 01:03:03.672547   57917 driver.go:378] Setting default libvirt URI to qemu:///system
	I1026 01:03:03.694145   57917 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1026 01:03:03.694246   57917 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 01:03:03.745334   57917 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:37 SystemTime:2023-10-26 01:03:03.7367164 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 01:03:03.745438   57917 docker.go:295] overlay module found
	I1026 01:03:03.747498   57917 out.go:177] * Using the docker driver based on user configuration
	I1026 01:03:03.748838   57917 start.go:298] selected driver: docker
	I1026 01:03:03.748850   57917 start.go:902] validating driver "docker" against <nil>
	I1026 01:03:03.748867   57917 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 01:03:03.749614   57917 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 01:03:03.800261   57917 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:37 SystemTime:2023-10-26 01:03:03.79149572 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 01:03:03.800439   57917 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1026 01:03:03.800685   57917 start_flags.go:934] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 01:03:03.802707   57917 out.go:177] * Using Docker driver with root privileges
	I1026 01:03:03.804245   57917 cni.go:84] Creating CNI manager for ""
	I1026 01:03:03.804267   57917 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 01:03:03.804285   57917 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1026 01:03:03.804300   57917 start_flags.go:323] config:
	{Name:ingress-addon-legacy-075799 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-075799 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1026 01:03:03.806038   57917 out.go:177] * Starting control plane node ingress-addon-legacy-075799 in cluster ingress-addon-legacy-075799
	I1026 01:03:03.807437   57917 cache.go:121] Beginning downloading kic base image for docker with crio
	I1026 01:03:03.808756   57917 out.go:177] * Pulling base image ...
	I1026 01:03:03.810207   57917 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1026 01:03:03.810235   57917 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon
	I1026 01:03:03.826009   57917 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon, skipping pull
	I1026 01:03:03.826031   57917 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 exists in daemon, skipping load
	I1026 01:03:03.844135   57917 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I1026 01:03:03.844162   57917 cache.go:56] Caching tarball of preloaded images
	I1026 01:03:03.844313   57917 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1026 01:03:03.846487   57917 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1026 01:03:03.847955   57917 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1026 01:03:03.883602   57917 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/17491-8444/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I1026 01:03:08.048897   57917 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1026 01:03:08.049004   57917 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17491-8444/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1026 01:03:09.061073   57917 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I1026 01:03:09.061472   57917 profile.go:148] Saving config to /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/ingress-addon-legacy-075799/config.json ...
	I1026 01:03:09.061506   57917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/ingress-addon-legacy-075799/config.json: {Name:mk32b0f7de2600fa95c8cbb6c9b7a2572b13142d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:03:09.061688   57917 cache.go:194] Successfully downloaded all kic artifacts
	I1026 01:03:09.061716   57917 start.go:365] acquiring machines lock for ingress-addon-legacy-075799: {Name:mk681021898abe7b4307c95542daa7f2f1fd3f1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 01:03:09.061764   57917 start.go:369] acquired machines lock for "ingress-addon-legacy-075799" in 36.452µs
	I1026 01:03:09.061784   57917 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-075799 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-075799 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 01:03:09.061890   57917 start.go:125] createHost starting for "" (driver="docker")
	I1026 01:03:09.064365   57917 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1026 01:03:09.064573   57917 start.go:159] libmachine.API.Create for "ingress-addon-legacy-075799" (driver="docker")
	I1026 01:03:09.064604   57917 client.go:168] LocalClient.Create starting
	I1026 01:03:09.064668   57917 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17491-8444/.minikube/certs/ca.pem
	I1026 01:03:09.064697   57917 main.go:141] libmachine: Decoding PEM data...
	I1026 01:03:09.064713   57917 main.go:141] libmachine: Parsing certificate...
	I1026 01:03:09.064760   57917 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17491-8444/.minikube/certs/cert.pem
	I1026 01:03:09.064780   57917 main.go:141] libmachine: Decoding PEM data...
	I1026 01:03:09.064793   57917 main.go:141] libmachine: Parsing certificate...
	I1026 01:03:09.065075   57917 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-075799 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1026 01:03:09.080918   57917 cli_runner.go:211] docker network inspect ingress-addon-legacy-075799 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1026 01:03:09.080991   57917 network_create.go:281] running [docker network inspect ingress-addon-legacy-075799] to gather additional debugging logs...
	I1026 01:03:09.081009   57917 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-075799
	W1026 01:03:09.096659   57917 cli_runner.go:211] docker network inspect ingress-addon-legacy-075799 returned with exit code 1
	I1026 01:03:09.096692   57917 network_create.go:284] error running [docker network inspect ingress-addon-legacy-075799]: docker network inspect ingress-addon-legacy-075799: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-075799 not found
	I1026 01:03:09.096711   57917 network_create.go:286] output of [docker network inspect ingress-addon-legacy-075799]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-075799 not found
	
	** /stderr **
	I1026 01:03:09.096842   57917 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 01:03:09.113998   57917 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0004d5540}
	I1026 01:03:09.114044   57917 network_create.go:124] attempt to create docker network ingress-addon-legacy-075799 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1026 01:03:09.114092   57917 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-075799 ingress-addon-legacy-075799
	I1026 01:03:09.167447   57917 network_create.go:108] docker network ingress-addon-legacy-075799 192.168.49.0/24 created
	I1026 01:03:09.167475   57917 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-075799" container
	I1026 01:03:09.167542   57917 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1026 01:03:09.182894   57917 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-075799 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-075799 --label created_by.minikube.sigs.k8s.io=true
	I1026 01:03:09.199902   57917 oci.go:103] Successfully created a docker volume ingress-addon-legacy-075799
	I1026 01:03:09.199996   57917 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-075799-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-075799 --entrypoint /usr/bin/test -v ingress-addon-legacy-075799:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -d /var/lib
	I1026 01:03:10.960240   57917 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-075799-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-075799 --entrypoint /usr/bin/test -v ingress-addon-legacy-075799:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -d /var/lib: (1.760176088s)
	I1026 01:03:10.960279   57917 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-075799
	I1026 01:03:10.960300   57917 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1026 01:03:10.960319   57917 kic.go:194] Starting extracting preloaded images to volume ...
	I1026 01:03:10.960388   57917 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17491-8444/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-075799:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -I lz4 -xf /preloaded.tar -C /extractDir
	I1026 01:03:16.397644   57917 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17491-8444/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-075799:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -I lz4 -xf /preloaded.tar -C /extractDir: (5.437204793s)
	I1026 01:03:16.397707   57917 kic.go:203] duration metric: took 5.437384 seconds to extract preloaded images to volume
	W1026 01:03:16.397914   57917 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1026 01:03:16.398035   57917 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1026 01:03:16.448595   57917 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-075799 --name ingress-addon-legacy-075799 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-075799 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-075799 --network ingress-addon-legacy-075799 --ip 192.168.49.2 --volume ingress-addon-legacy-075799:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883
	I1026 01:03:16.838405   57917 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-075799 --format={{.State.Running}}
	I1026 01:03:16.856049   57917 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-075799 --format={{.State.Status}}
	I1026 01:03:16.874996   57917 cli_runner.go:164] Run: docker exec ingress-addon-legacy-075799 stat /var/lib/dpkg/alternatives/iptables
	I1026 01:03:16.939253   57917 oci.go:144] the created container "ingress-addon-legacy-075799" has a running status.
	I1026 01:03:16.939285   57917 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17491-8444/.minikube/machines/ingress-addon-legacy-075799/id_rsa...
	I1026 01:03:17.082872   57917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17491-8444/.minikube/machines/ingress-addon-legacy-075799/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1026 01:03:17.082960   57917 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17491-8444/.minikube/machines/ingress-addon-legacy-075799/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1026 01:03:17.106089   57917 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-075799 --format={{.State.Status}}
	I1026 01:03:17.127233   57917 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1026 01:03:17.127256   57917 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-075799 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1026 01:03:17.199525   57917 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-075799 --format={{.State.Status}}
	I1026 01:03:17.216648   57917 machine.go:88] provisioning docker machine ...
	I1026 01:03:17.216700   57917 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-075799"
	I1026 01:03:17.216771   57917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-075799
	I1026 01:03:17.239252   57917 main.go:141] libmachine: Using SSH client type: native
	I1026 01:03:17.239964   57917 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 127.0.0.1 32789 <nil> <nil>}
	I1026 01:03:17.240000   57917 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-075799 && echo "ingress-addon-legacy-075799" | sudo tee /etc/hostname
	I1026 01:03:17.240790   57917 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56422->127.0.0.1:32789: read: connection reset by peer
	I1026 01:03:20.372063   57917 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-075799
	
	I1026 01:03:20.372151   57917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-075799
	I1026 01:03:20.389205   57917 main.go:141] libmachine: Using SSH client type: native
	I1026 01:03:20.389541   57917 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 127.0.0.1 32789 <nil> <nil>}
	I1026 01:03:20.389560   57917 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-075799' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-075799/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-075799' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 01:03:20.509603   57917 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 01:03:20.509639   57917 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17491-8444/.minikube CaCertPath:/home/jenkins/minikube-integration/17491-8444/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17491-8444/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17491-8444/.minikube}
	I1026 01:03:20.509661   57917 ubuntu.go:177] setting up certificates
	I1026 01:03:20.509688   57917 provision.go:83] configureAuth start
	I1026 01:03:20.509749   57917 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-075799
	I1026 01:03:20.525485   57917 provision.go:138] copyHostCerts
	I1026 01:03:20.525529   57917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17491-8444/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17491-8444/.minikube/ca.pem
	I1026 01:03:20.525566   57917 exec_runner.go:144] found /home/jenkins/minikube-integration/17491-8444/.minikube/ca.pem, removing ...
	I1026 01:03:20.525578   57917 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17491-8444/.minikube/ca.pem
	I1026 01:03:20.525650   57917 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17491-8444/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17491-8444/.minikube/ca.pem (1078 bytes)
	I1026 01:03:20.525749   57917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17491-8444/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17491-8444/.minikube/cert.pem
	I1026 01:03:20.525774   57917 exec_runner.go:144] found /home/jenkins/minikube-integration/17491-8444/.minikube/cert.pem, removing ...
	I1026 01:03:20.525783   57917 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17491-8444/.minikube/cert.pem
	I1026 01:03:20.525820   57917 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17491-8444/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17491-8444/.minikube/cert.pem (1123 bytes)
	I1026 01:03:20.525878   57917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17491-8444/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17491-8444/.minikube/key.pem
	I1026 01:03:20.525909   57917 exec_runner.go:144] found /home/jenkins/minikube-integration/17491-8444/.minikube/key.pem, removing ...
	I1026 01:03:20.525915   57917 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17491-8444/.minikube/key.pem
	I1026 01:03:20.525949   57917 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17491-8444/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17491-8444/.minikube/key.pem (1675 bytes)
	I1026 01:03:20.526013   57917 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17491-8444/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17491-8444/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17491-8444/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-075799 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-075799]
	I1026 01:03:20.597689   57917 provision.go:172] copyRemoteCerts
	I1026 01:03:20.597756   57917 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 01:03:20.597800   57917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-075799
	I1026 01:03:20.614883   57917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32789 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/ingress-addon-legacy-075799/id_rsa Username:docker}
	I1026 01:03:20.705771   57917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17491-8444/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1026 01:03:20.705837   57917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 01:03:20.728865   57917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17491-8444/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1026 01:03:20.728934   57917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I1026 01:03:20.749697   57917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17491-8444/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1026 01:03:20.749761   57917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 01:03:20.769878   57917 provision.go:86] duration metric: configureAuth took 260.17313ms
	I1026 01:03:20.769904   57917 ubuntu.go:193] setting minikube options for container-runtime
	I1026 01:03:20.770147   57917 config.go:182] Loaded profile config "ingress-addon-legacy-075799": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1026 01:03:20.770279   57917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-075799
	I1026 01:03:20.786626   57917 main.go:141] libmachine: Using SSH client type: native
	I1026 01:03:20.786991   57917 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 127.0.0.1 32789 <nil> <nil>}
	I1026 01:03:20.787016   57917 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 01:03:21.014816   57917 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 01:03:21.014842   57917 machine.go:91] provisioned docker machine in 3.79816205s
	I1026 01:03:21.014854   57917 client.go:171] LocalClient.Create took 11.950241484s
	I1026 01:03:21.014874   57917 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-075799" took 11.950301602s
	I1026 01:03:21.014881   57917 start.go:300] post-start starting for "ingress-addon-legacy-075799" (driver="docker")
	I1026 01:03:21.014890   57917 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 01:03:21.014946   57917 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 01:03:21.015004   57917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-075799
	I1026 01:03:21.032080   57917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32789 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/ingress-addon-legacy-075799/id_rsa Username:docker}
	I1026 01:03:21.118687   57917 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 01:03:21.121761   57917 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 01:03:21.121800   57917 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1026 01:03:21.121808   57917 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1026 01:03:21.121815   57917 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1026 01:03:21.121832   57917 filesync.go:126] Scanning /home/jenkins/minikube-integration/17491-8444/.minikube/addons for local assets ...
	I1026 01:03:21.121897   57917 filesync.go:126] Scanning /home/jenkins/minikube-integration/17491-8444/.minikube/files for local assets ...
	I1026 01:03:21.122001   57917 filesync.go:149] local asset: /home/jenkins/minikube-integration/17491-8444/.minikube/files/etc/ssl/certs/152462.pem -> 152462.pem in /etc/ssl/certs
	I1026 01:03:21.122017   57917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17491-8444/.minikube/files/etc/ssl/certs/152462.pem -> /etc/ssl/certs/152462.pem
	I1026 01:03:21.122110   57917 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 01:03:21.131043   57917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/files/etc/ssl/certs/152462.pem --> /etc/ssl/certs/152462.pem (1708 bytes)
	I1026 01:03:21.153020   57917 start.go:303] post-start completed in 138.122119ms
	I1026 01:03:21.153416   57917 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-075799
	I1026 01:03:21.169859   57917 profile.go:148] Saving config to /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/ingress-addon-legacy-075799/config.json ...
	I1026 01:03:21.170112   57917 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 01:03:21.170152   57917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-075799
	I1026 01:03:21.186851   57917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32789 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/ingress-addon-legacy-075799/id_rsa Username:docker}
	I1026 01:03:21.270381   57917 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 01:03:21.274522   57917 start.go:128] duration metric: createHost completed in 12.212609409s
	I1026 01:03:21.274548   57917 start.go:83] releasing machines lock for "ingress-addon-legacy-075799", held for 12.212771247s
	I1026 01:03:21.274619   57917 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-075799
	I1026 01:03:21.290572   57917 ssh_runner.go:195] Run: cat /version.json
	I1026 01:03:21.290625   57917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-075799
	I1026 01:03:21.290669   57917 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 01:03:21.290738   57917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-075799
	I1026 01:03:21.310532   57917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32789 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/ingress-addon-legacy-075799/id_rsa Username:docker}
	I1026 01:03:21.311045   57917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32789 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/ingress-addon-legacy-075799/id_rsa Username:docker}
	I1026 01:03:21.479477   57917 ssh_runner.go:195] Run: systemctl --version
	I1026 01:03:21.483533   57917 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 01:03:21.619064   57917 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1026 01:03:21.623022   57917 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 01:03:21.640053   57917 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1026 01:03:21.640129   57917 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 01:03:21.666614   57917 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1026 01:03:21.666644   57917 start.go:472] detecting cgroup driver to use...
	I1026 01:03:21.666681   57917 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1026 01:03:21.666740   57917 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 01:03:21.680273   57917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 01:03:21.690062   57917 docker.go:198] disabling cri-docker service (if available) ...
	I1026 01:03:21.690116   57917 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 01:03:21.702267   57917 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 01:03:21.714557   57917 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 01:03:21.786131   57917 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 01:03:21.862862   57917 docker.go:214] disabling docker service ...
	I1026 01:03:21.862913   57917 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 01:03:21.879665   57917 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 01:03:21.889617   57917 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 01:03:21.963280   57917 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 01:03:22.039787   57917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 01:03:22.050548   57917 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 01:03:22.065525   57917 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1026 01:03:22.065593   57917 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:03:22.074302   57917 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 01:03:22.074369   57917 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:03:22.083136   57917 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:03:22.091557   57917 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:03:22.099845   57917 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 01:03:22.107474   57917 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 01:03:22.114894   57917 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 01:03:22.122094   57917 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 01:03:22.194533   57917 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 01:03:22.311241   57917 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 01:03:22.311310   57917 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 01:03:22.314698   57917 start.go:540] Will wait 60s for crictl version
	I1026 01:03:22.314758   57917 ssh_runner.go:195] Run: which crictl
	I1026 01:03:22.317728   57917 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1026 01:03:22.349017   57917 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1026 01:03:22.349096   57917 ssh_runner.go:195] Run: crio --version
	I1026 01:03:22.381585   57917 ssh_runner.go:195] Run: crio --version
	I1026 01:03:22.417147   57917 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I1026 01:03:22.418691   57917 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-075799 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 01:03:22.434406   57917 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1026 01:03:22.438225   57917 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 01:03:22.448257   57917 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1026 01:03:22.448330   57917 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 01:03:22.492447   57917 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1026 01:03:22.492503   57917 ssh_runner.go:195] Run: which lz4
	I1026 01:03:22.495669   57917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17491-8444/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1026 01:03:22.495758   57917 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1026 01:03:22.498952   57917 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1026 01:03:22.498982   57917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I1026 01:03:23.425655   57917 crio.go:444] Took 0.929929 seconds to copy over tarball
	I1026 01:03:23.425741   57917 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1026 01:03:25.721739   57917 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.295963508s)
	I1026 01:03:25.721772   57917 crio.go:451] Took 2.296079 seconds to extract the tarball
	I1026 01:03:25.721783   57917 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1026 01:03:25.790224   57917 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 01:03:25.821627   57917 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1026 01:03:25.821649   57917 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1026 01:03:25.821725   57917 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 01:03:25.821746   57917 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1026 01:03:25.821764   57917 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1026 01:03:25.821790   57917 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1026 01:03:25.821801   57917 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1026 01:03:25.821853   57917 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1026 01:03:25.821863   57917 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1026 01:03:25.821866   57917 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1026 01:03:25.823081   57917 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1026 01:03:25.823102   57917 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1026 01:03:25.823114   57917 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1026 01:03:25.823139   57917 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1026 01:03:25.823141   57917 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 01:03:25.823160   57917 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1026 01:03:25.823083   57917 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1026 01:03:25.823081   57917 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1026 01:03:26.005860   57917 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I1026 01:03:26.007594   57917 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1026 01:03:26.008313   57917 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I1026 01:03:26.012155   57917 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I1026 01:03:26.016071   57917 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 01:03:26.051955   57917 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I1026 01:03:26.064699   57917 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I1026 01:03:26.069292   57917 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I1026 01:03:26.100363   57917 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I1026 01:03:26.100408   57917 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1026 01:03:26.100467   57917 ssh_runner.go:195] Run: which crictl
	I1026 01:03:26.100601   57917 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1026 01:03:26.100636   57917 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1026 01:03:26.100681   57917 ssh_runner.go:195] Run: which crictl
	I1026 01:03:26.107416   57917 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I1026 01:03:26.107459   57917 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I1026 01:03:26.107510   57917 ssh_runner.go:195] Run: which crictl
	I1026 01:03:26.109406   57917 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I1026 01:03:26.109449   57917 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1026 01:03:26.109512   57917 ssh_runner.go:195] Run: which crictl
	I1026 01:03:26.211156   57917 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I1026 01:03:26.211196   57917 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1026 01:03:26.211203   57917 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I1026 01:03:26.211235   57917 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1026 01:03:26.211287   57917 ssh_runner.go:195] Run: which crictl
	I1026 01:03:26.211296   57917 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1026 01:03:26.211228   57917 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I1026 01:03:26.211338   57917 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I1026 01:03:26.211349   57917 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1026 01:03:26.211236   57917 ssh_runner.go:195] Run: which crictl
	I1026 01:03:26.211374   57917 ssh_runner.go:195] Run: which crictl
	I1026 01:03:26.211404   57917 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I1026 01:03:26.211415   57917 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1026 01:03:26.309855   57917 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17491-8444/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1026 01:03:26.309922   57917 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17491-8444/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1026 01:03:26.309927   57917 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I1026 01:03:26.310017   57917 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1026 01:03:26.310083   57917 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1026 01:03:26.310138   57917 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17491-8444/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I1026 01:03:26.310213   57917 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17491-8444/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1026 01:03:26.397043   57917 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17491-8444/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I1026 01:03:26.397046   57917 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17491-8444/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I1026 01:03:26.397122   57917 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17491-8444/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I1026 01:03:26.397170   57917 cache_images.go:92] LoadImages completed in 575.507776ms
	W1026 01:03:26.397249   57917 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17491-8444/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0: no such file or directory
	I1026 01:03:26.397334   57917 ssh_runner.go:195] Run: crio config
	I1026 01:03:26.438022   57917 cni.go:84] Creating CNI manager for ""
	I1026 01:03:26.438045   57917 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 01:03:26.438060   57917 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1026 01:03:26.438077   57917 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-075799 NodeName:ingress-addon-legacy-075799 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1026 01:03:26.438213   57917 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-075799"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 01:03:26.438283   57917 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-075799 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-075799 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1026 01:03:26.438327   57917 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1026 01:03:26.446188   57917 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 01:03:26.446253   57917 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 01:03:26.453942   57917 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I1026 01:03:26.469194   57917 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1026 01:03:26.484931   57917 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1026 01:03:26.500705   57917 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1026 01:03:26.503997   57917 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 01:03:26.513918   57917 certs.go:56] Setting up /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/ingress-addon-legacy-075799 for IP: 192.168.49.2
	I1026 01:03:26.513948   57917 certs.go:190] acquiring lock for shared ca certs: {Name:mk5c45c423cc5a6761a0ccf5b25a0c8b531fe271 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:03:26.514078   57917 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17491-8444/.minikube/ca.key
	I1026 01:03:26.514118   57917 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17491-8444/.minikube/proxy-client-ca.key
	I1026 01:03:26.514158   57917 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/ingress-addon-legacy-075799/client.key
	I1026 01:03:26.514173   57917 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/ingress-addon-legacy-075799/client.crt with IP's: []
	I1026 01:03:26.660085   57917 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/ingress-addon-legacy-075799/client.crt ...
	I1026 01:03:26.660119   57917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/ingress-addon-legacy-075799/client.crt: {Name:mk9e3faf109c47f54acd6002e460842e0e16663c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:03:26.660296   57917 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/ingress-addon-legacy-075799/client.key ...
	I1026 01:03:26.660307   57917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/ingress-addon-legacy-075799/client.key: {Name:mk1456e10dc0471c99463f89b01909a74863bc97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:03:26.660379   57917 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/ingress-addon-legacy-075799/apiserver.key.dd3b5fb2
	I1026 01:03:26.660394   57917 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/ingress-addon-legacy-075799/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1026 01:03:26.996276   57917 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/ingress-addon-legacy-075799/apiserver.crt.dd3b5fb2 ...
	I1026 01:03:26.996308   57917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/ingress-addon-legacy-075799/apiserver.crt.dd3b5fb2: {Name:mk6a893a0c5c185867b1a653792186a04b470836 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:03:26.996468   57917 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/ingress-addon-legacy-075799/apiserver.key.dd3b5fb2 ...
	I1026 01:03:26.996478   57917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/ingress-addon-legacy-075799/apiserver.key.dd3b5fb2: {Name:mk8b30ef2f5ef9666d1834b93489e2bbf82dc516 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:03:26.996542   57917 certs.go:337] copying /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/ingress-addon-legacy-075799/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/ingress-addon-legacy-075799/apiserver.crt
	I1026 01:03:26.996620   57917 certs.go:341] copying /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/ingress-addon-legacy-075799/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/ingress-addon-legacy-075799/apiserver.key
	I1026 01:03:26.996673   57917 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/ingress-addon-legacy-075799/proxy-client.key
	I1026 01:03:26.996696   57917 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/ingress-addon-legacy-075799/proxy-client.crt with IP's: []
	I1026 01:03:27.188852   57917 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/ingress-addon-legacy-075799/proxy-client.crt ...
	I1026 01:03:27.188888   57917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/ingress-addon-legacy-075799/proxy-client.crt: {Name:mk53ce00bee439724d01d0a84100a6311741f110 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:03:27.189052   57917 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/ingress-addon-legacy-075799/proxy-client.key ...
	I1026 01:03:27.189066   57917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/ingress-addon-legacy-075799/proxy-client.key: {Name:mk8a99f1ca86fece8452d4f78306a4c45e9a7fad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:03:27.189129   57917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/ingress-addon-legacy-075799/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1026 01:03:27.189149   57917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/ingress-addon-legacy-075799/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1026 01:03:27.189158   57917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/ingress-addon-legacy-075799/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1026 01:03:27.189171   57917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/ingress-addon-legacy-075799/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1026 01:03:27.189187   57917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17491-8444/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1026 01:03:27.189200   57917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17491-8444/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1026 01:03:27.189212   57917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17491-8444/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1026 01:03:27.189255   57917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17491-8444/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1026 01:03:27.189318   57917 certs.go:437] found cert: /home/jenkins/minikube-integration/17491-8444/.minikube/certs/home/jenkins/minikube-integration/17491-8444/.minikube/certs/15246.pem (1338 bytes)
	W1026 01:03:27.189353   57917 certs.go:433] ignoring /home/jenkins/minikube-integration/17491-8444/.minikube/certs/home/jenkins/minikube-integration/17491-8444/.minikube/certs/15246_empty.pem, impossibly tiny 0 bytes
	I1026 01:03:27.189363   57917 certs.go:437] found cert: /home/jenkins/minikube-integration/17491-8444/.minikube/certs/home/jenkins/minikube-integration/17491-8444/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 01:03:27.189389   57917 certs.go:437] found cert: /home/jenkins/minikube-integration/17491-8444/.minikube/certs/home/jenkins/minikube-integration/17491-8444/.minikube/certs/ca.pem (1078 bytes)
	I1026 01:03:27.189413   57917 certs.go:437] found cert: /home/jenkins/minikube-integration/17491-8444/.minikube/certs/home/jenkins/minikube-integration/17491-8444/.minikube/certs/cert.pem (1123 bytes)
	I1026 01:03:27.189434   57917 certs.go:437] found cert: /home/jenkins/minikube-integration/17491-8444/.minikube/certs/home/jenkins/minikube-integration/17491-8444/.minikube/certs/key.pem (1675 bytes)
	I1026 01:03:27.189473   57917 certs.go:437] found cert: /home/jenkins/minikube-integration/17491-8444/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17491-8444/.minikube/files/etc/ssl/certs/152462.pem (1708 bytes)
	I1026 01:03:27.189502   57917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17491-8444/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:03:27.189515   57917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17491-8444/.minikube/certs/15246.pem -> /usr/share/ca-certificates/15246.pem
	I1026 01:03:27.189528   57917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17491-8444/.minikube/files/etc/ssl/certs/152462.pem -> /usr/share/ca-certificates/152462.pem
	I1026 01:03:27.190093   57917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/ingress-addon-legacy-075799/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1026 01:03:27.211456   57917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/ingress-addon-legacy-075799/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 01:03:27.232074   57917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/ingress-addon-legacy-075799/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 01:03:27.252613   57917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/ingress-addon-legacy-075799/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 01:03:27.274322   57917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 01:03:27.295294   57917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 01:03:27.316785   57917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 01:03:27.337872   57917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1026 01:03:27.359750   57917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 01:03:27.381509   57917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/certs/15246.pem --> /usr/share/ca-certificates/15246.pem (1338 bytes)
	I1026 01:03:27.403443   57917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/files/etc/ssl/certs/152462.pem --> /usr/share/ca-certificates/152462.pem (1708 bytes)
	I1026 01:03:27.425622   57917 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 01:03:27.442055   57917 ssh_runner.go:195] Run: openssl version
	I1026 01:03:27.446919   57917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 01:03:27.455261   57917 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:03:27.458391   57917 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 26 00:54 /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:03:27.458445   57917 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:03:27.464558   57917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 01:03:27.472593   57917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15246.pem && ln -fs /usr/share/ca-certificates/15246.pem /etc/ssl/certs/15246.pem"
	I1026 01:03:27.480649   57917 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15246.pem
	I1026 01:03:27.483785   57917 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 26 01:00 /usr/share/ca-certificates/15246.pem
	I1026 01:03:27.483826   57917 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15246.pem
	I1026 01:03:27.489919   57917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15246.pem /etc/ssl/certs/51391683.0"
	I1026 01:03:27.498289   57917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/152462.pem && ln -fs /usr/share/ca-certificates/152462.pem /etc/ssl/certs/152462.pem"
	I1026 01:03:27.506913   57917 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/152462.pem
	I1026 01:03:27.510185   57917 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 26 01:00 /usr/share/ca-certificates/152462.pem
	I1026 01:03:27.510242   57917 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/152462.pem
	I1026 01:03:27.516385   57917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/152462.pem /etc/ssl/certs/3ec20f2e.0"
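The symlink names in the runs above (`b5213941.0`, `51391683.0`, `3ec20f2e.0`) are OpenSSL subject hashes: `openssl x509 -hash -noout` prints the hash under which OpenSSL's `-CApath` lookup expects to find the CA. A sketch with a throwaway self-signed CA (all paths and the CN are hypothetical):

```shell
# Generate a throwaway CA, then link it under its OpenSSL subject
# hash -- the same naming scheme as /etc/ssl/certs/b5213941.0 above.
DIR=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj '/CN=demoCA' -keyout "$DIR/ca.key" -out "$DIR/ca.pem" 2>/dev/null

HASH=$(openssl x509 -hash -noout -in "$DIR/ca.pem")
ln -fs "$DIR/ca.pem" "$DIR/$HASH.0"

ls -l "$DIR/$HASH.0"
```

The `.0` suffix disambiguates hash collisions; a second CA with the same subject hash would be linked as `$HASH.1`.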
	I1026 01:03:27.524651   57917 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1026 01:03:27.527618   57917 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1026 01:03:27.527663   57917 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-075799 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-075799 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1026 01:03:27.527756   57917 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 01:03:27.527797   57917 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 01:03:27.559920   57917 cri.go:89] found id: ""
	I1026 01:03:27.560008   57917 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 01:03:27.568137   57917 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 01:03:27.575894   57917 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1026 01:03:27.575968   57917 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 01:03:27.583526   57917 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 01:03:27.583575   57917 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1026 01:03:27.625761   57917 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1026 01:03:27.625810   57917 kubeadm.go:322] [preflight] Running pre-flight checks
	I1026 01:03:27.663383   57917 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1026 01:03:27.663487   57917 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1045-gcp
	I1026 01:03:27.663549   57917 kubeadm.go:322] OS: Linux
	I1026 01:03:27.663617   57917 kubeadm.go:322] CGROUPS_CPU: enabled
	I1026 01:03:27.663669   57917 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1026 01:03:27.663733   57917 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1026 01:03:27.663807   57917 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1026 01:03:27.663875   57917 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1026 01:03:27.663946   57917 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1026 01:03:27.730408   57917 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 01:03:27.730548   57917 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 01:03:27.730737   57917 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1026 01:03:27.907393   57917 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 01:03:27.908309   57917 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 01:03:27.908387   57917 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1026 01:03:27.981903   57917 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 01:03:27.984881   57917 out.go:204]   - Generating certificates and keys ...
	I1026 01:03:27.985024   57917 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1026 01:03:27.985108   57917 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1026 01:03:28.174375   57917 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 01:03:28.322292   57917 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1026 01:03:28.510583   57917 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1026 01:03:28.603962   57917 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1026 01:03:28.778817   57917 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1026 01:03:28.778998   57917 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-075799 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1026 01:03:28.941876   57917 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1026 01:03:28.942062   57917 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-075799 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1026 01:03:29.043789   57917 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 01:03:29.367211   57917 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 01:03:29.512840   57917 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1026 01:03:29.512927   57917 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 01:03:29.879867   57917 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 01:03:30.202316   57917 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 01:03:30.296833   57917 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 01:03:30.451765   57917 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 01:03:30.452399   57917 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 01:03:30.454554   57917 out.go:204]   - Booting up control plane ...
	I1026 01:03:30.454661   57917 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 01:03:30.459723   57917 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 01:03:30.460739   57917 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 01:03:30.461438   57917 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 01:03:30.463420   57917 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1026 01:03:37.965850   57917 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.502358 seconds
	I1026 01:03:37.965996   57917 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 01:03:37.976926   57917 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 01:03:38.492906   57917 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 01:03:38.493103   57917 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-075799 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1026 01:03:38.999917   57917 kubeadm.go:322] [bootstrap-token] Using token: 83zp9u.de6zut6g9telnffa
	I1026 01:03:39.001652   57917 out.go:204]   - Configuring RBAC rules ...
	I1026 01:03:39.001831   57917 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 01:03:39.004954   57917 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 01:03:39.010852   57917 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 01:03:39.012868   57917 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 01:03:39.015525   57917 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 01:03:39.018535   57917 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 01:03:39.025401   57917 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 01:03:39.196070   57917 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1026 01:03:39.415050   57917 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1026 01:03:39.416664   57917 kubeadm.go:322] 
	I1026 01:03:39.416749   57917 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1026 01:03:39.416763   57917 kubeadm.go:322] 
	I1026 01:03:39.416843   57917 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1026 01:03:39.416852   57917 kubeadm.go:322] 
	I1026 01:03:39.416872   57917 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1026 01:03:39.416931   57917 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 01:03:39.416979   57917 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 01:03:39.416986   57917 kubeadm.go:322] 
	I1026 01:03:39.417038   57917 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1026 01:03:39.417151   57917 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 01:03:39.417285   57917 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 01:03:39.417305   57917 kubeadm.go:322] 
	I1026 01:03:39.417414   57917 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 01:03:39.417548   57917 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1026 01:03:39.417562   57917 kubeadm.go:322] 
	I1026 01:03:39.417651   57917 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 83zp9u.de6zut6g9telnffa \
	I1026 01:03:39.417813   57917 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:fcb226ee6da23e7f860dc1a15447b5e2bdaebad51636d54784ba9f6eb94cd3aa \
	I1026 01:03:39.417856   57917 kubeadm.go:322]     --control-plane 
	I1026 01:03:39.417866   57917 kubeadm.go:322] 
	I1026 01:03:39.417957   57917 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1026 01:03:39.417966   57917 kubeadm.go:322] 
	I1026 01:03:39.418066   57917 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 83zp9u.de6zut6g9telnffa \
	I1026 01:03:39.418166   57917 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:fcb226ee6da23e7f860dc1a15447b5e2bdaebad51636d54784ba9f6eb94cd3aa 
	I1026 01:03:39.420075   57917 kubeadm.go:322] W1026 01:03:27.625222    1374 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1026 01:03:39.420339   57917 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-gcp\n", err: exit status 1
	I1026 01:03:39.420488   57917 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1026 01:03:39.420656   57917 kubeadm.go:322] W1026 01:03:30.459468    1374 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1026 01:03:39.420831   57917 kubeadm.go:322] W1026 01:03:30.460537    1374 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1026 01:03:39.420859   57917 cni.go:84] Creating CNI manager for ""
	I1026 01:03:39.420873   57917 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 01:03:39.422839   57917 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1026 01:03:39.424183   57917 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1026 01:03:39.427932   57917 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I1026 01:03:39.427951   57917 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1026 01:03:39.444388   57917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1026 01:03:39.864761   57917 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 01:03:39.864827   57917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=af1d352f1030f8f3ea7f97e311e7fe82ef319942 minikube.k8s.io/name=ingress-addon-legacy-075799 minikube.k8s.io/updated_at=2023_10_26T01_03_39_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:03:39.864860   57917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:03:39.871877   57917 ops.go:34] apiserver oom_adj: -16
	I1026 01:03:39.941841   57917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:03:40.023301   57917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:03:40.588944   57917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:03:41.088939   57917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:03:41.589287   57917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:03:42.088404   57917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:03:42.588944   57917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:03:43.088710   57917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:03:43.588854   57917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:03:44.089225   57917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:03:44.589046   57917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:03:45.089318   57917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:03:45.588788   57917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:03:46.088738   57917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:03:46.589199   57917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:03:47.089251   57917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:03:47.589310   57917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:03:48.088765   57917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:03:48.588554   57917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:03:49.088926   57917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:03:49.588756   57917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:03:50.089168   57917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:03:50.589331   57917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:03:51.088677   57917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:03:51.588561   57917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:03:52.089159   57917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:03:52.589316   57917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:03:53.089108   57917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:03:53.589117   57917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:03:54.088692   57917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:03:54.219418   57917 kubeadm.go:1081] duration metric: took 14.354623484s to wait for elevateKubeSystemPrivileges.
	I1026 01:03:54.219455   57917 kubeadm.go:406] StartCluster complete in 26.691794853s
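	(The repeated `kubectl get sa default` runs above are a fixed-interval poll: minikube retries roughly every 500ms until the default service account exists, then reports the total wait as a duration metric. A minimal sketch of that pattern, in Python rather than minikube's Go, with an illustrative `poll_until` helper that is not minikube's actual code:)

```python
import time

def poll_until(condition, interval=0.5, timeout=30.0,
               clock=time.monotonic, sleep=time.sleep):
    """Call `condition` every `interval` seconds until it returns True
    or `timeout` elapses. Returns the number of attempts on success;
    raises TimeoutError otherwise. `clock` and `sleep` are injectable
    so the loop can be tested without real waiting."""
    deadline = clock() + timeout
    attempts = 0
    while clock() < deadline:
        attempts += 1
        if condition():
            return attempts
        sleep(interval)
    raise TimeoutError(f"condition not met within {timeout}s")
```

	(In the log, `condition` corresponds to the `kubectl get sa default` probe succeeding, and the 14.35s "elevateKubeSystemPrivileges" metric is the total time spent in the loop.)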
	I1026 01:03:54.219501   57917 settings.go:142] acquiring lock: {Name:mk3f6a6b512050e15c823ee035bfa16b068e5bc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:03:54.219584   57917 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17491-8444/kubeconfig
	I1026 01:03:54.220599   57917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17491-8444/kubeconfig: {Name:mkd7fc4e7a7060baa25a329208944605474cc380 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:03:54.220918   57917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 01:03:54.220998   57917 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1026 01:03:54.221068   57917 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-075799"
	I1026 01:03:54.221090   57917 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-075799"
	I1026 01:03:54.221143   57917 host.go:66] Checking if "ingress-addon-legacy-075799" exists ...
	I1026 01:03:54.221148   57917 config.go:182] Loaded profile config "ingress-addon-legacy-075799": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1026 01:03:54.221203   57917 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-075799"
	I1026 01:03:54.221230   57917 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-075799"
	I1026 01:03:54.221569   57917 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-075799 --format={{.State.Status}}
	I1026 01:03:54.221570   57917 kapi.go:59] client config for ingress-addon-legacy-075799: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17491-8444/.minikube/profiles/ingress-addon-legacy-075799/client.crt", KeyFile:"/home/jenkins/minikube-integration/17491-8444/.minikube/profiles/ingress-addon-legacy-075799/client.key", CAFile:"/home/jenkins/minikube-integration/17491-8444/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c28c40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1026 01:03:54.221739   57917 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-075799 --format={{.State.Status}}
	I1026 01:03:54.222359   57917 cert_rotation.go:137] Starting client certificate rotation controller
	I1026 01:03:54.241469   57917 kapi.go:59] client config for ingress-addon-legacy-075799: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17491-8444/.minikube/profiles/ingress-addon-legacy-075799/client.crt", KeyFile:"/home/jenkins/minikube-integration/17491-8444/.minikube/profiles/ingress-addon-legacy-075799/client.key", CAFile:"/home/jenkins/minikube-integration/17491-8444/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c28c40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1026 01:03:54.241731   57917 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-075799"
	I1026 01:03:54.241777   57917 host.go:66] Checking if "ingress-addon-legacy-075799" exists ...
	I1026 01:03:54.242301   57917 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-075799 --format={{.State.Status}}
	I1026 01:03:54.245696   57917 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 01:03:54.247161   57917 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 01:03:54.247177   57917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 01:03:54.247219   57917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-075799
	I1026 01:03:54.258693   57917 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 01:03:54.258721   57917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 01:03:54.258783   57917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-075799
	I1026 01:03:54.265452   57917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32789 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/ingress-addon-legacy-075799/id_rsa Username:docker}
	I1026 01:03:54.275375   57917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32789 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/ingress-addon-legacy-075799/id_rsa Username:docker}
	I1026 01:03:54.300259   57917 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-075799" context rescaled to 1 replicas
	I1026 01:03:54.300305   57917 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 01:03:54.302189   57917 out.go:177] * Verifying Kubernetes components...
	I1026 01:03:54.304146   57917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 01:03:54.424890   57917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1026 01:03:54.425468   57917 kapi.go:59] client config for ingress-addon-legacy-075799: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17491-8444/.minikube/profiles/ingress-addon-legacy-075799/client.crt", KeyFile:"/home/jenkins/minikube-integration/17491-8444/.minikube/profiles/ingress-addon-legacy-075799/client.key", CAFile:"/home/jenkins/minikube-integration/17491-8444/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c28c40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1026 01:03:54.425849   57917 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-075799" to be "Ready" ...
	I1026 01:03:54.451664   57917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 01:03:54.453753   57917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 01:03:54.894944   57917 start.go:926] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1026 01:03:55.001215   57917 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1026 01:03:55.002729   57917 addons.go:502] enable addons completed in 781.732774ms: enabled=[storage-provisioner default-storageclass]
	I1026 01:03:56.434109   57917 node_ready.go:58] node "ingress-addon-legacy-075799" has status "Ready":"False"
	I1026 01:03:58.434373   57917 node_ready.go:58] node "ingress-addon-legacy-075799" has status "Ready":"False"
	I1026 01:04:00.047884   57917 node_ready.go:49] node "ingress-addon-legacy-075799" has status "Ready":"True"
	I1026 01:04:00.047914   57917 node_ready.go:38] duration metric: took 5.622039825s waiting for node "ingress-addon-legacy-075799" to be "Ready" ...
	I1026 01:04:00.047931   57917 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1026 01:04:00.116847   57917 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-75chh" in "kube-system" namespace to be "Ready" ...
	I1026 01:04:02.125564   57917 pod_ready.go:102] pod "coredns-66bff467f8-75chh" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-10-26 01:03:54 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1026 01:04:04.627257   57917 pod_ready.go:102] pod "coredns-66bff467f8-75chh" in "kube-system" namespace has status "Ready":"False"
	I1026 01:04:07.126745   57917 pod_ready.go:102] pod "coredns-66bff467f8-75chh" in "kube-system" namespace has status "Ready":"False"
	I1026 01:04:09.126979   57917 pod_ready.go:102] pod "coredns-66bff467f8-75chh" in "kube-system" namespace has status "Ready":"False"
	I1026 01:04:11.127657   57917 pod_ready.go:102] pod "coredns-66bff467f8-75chh" in "kube-system" namespace has status "Ready":"False"
	I1026 01:04:13.126825   57917 pod_ready.go:92] pod "coredns-66bff467f8-75chh" in "kube-system" namespace has status "Ready":"True"
	I1026 01:04:13.126850   57917 pod_ready.go:81] duration metric: took 13.009969346s waiting for pod "coredns-66bff467f8-75chh" in "kube-system" namespace to be "Ready" ...
	I1026 01:04:13.126861   57917 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-075799" in "kube-system" namespace to be "Ready" ...
	I1026 01:04:13.131391   57917 pod_ready.go:92] pod "etcd-ingress-addon-legacy-075799" in "kube-system" namespace has status "Ready":"True"
	I1026 01:04:13.131414   57917 pod_ready.go:81] duration metric: took 4.547861ms waiting for pod "etcd-ingress-addon-legacy-075799" in "kube-system" namespace to be "Ready" ...
	I1026 01:04:13.131431   57917 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-075799" in "kube-system" namespace to be "Ready" ...
	I1026 01:04:13.135481   57917 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-075799" in "kube-system" namespace has status "Ready":"True"
	I1026 01:04:13.135500   57917 pod_ready.go:81] duration metric: took 4.062816ms waiting for pod "kube-apiserver-ingress-addon-legacy-075799" in "kube-system" namespace to be "Ready" ...
	I1026 01:04:13.135508   57917 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-075799" in "kube-system" namespace to be "Ready" ...
	I1026 01:04:13.139482   57917 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-075799" in "kube-system" namespace has status "Ready":"True"
	I1026 01:04:13.139499   57917 pod_ready.go:81] duration metric: took 3.985038ms waiting for pod "kube-controller-manager-ingress-addon-legacy-075799" in "kube-system" namespace to be "Ready" ...
	I1026 01:04:13.139508   57917 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rgrw7" in "kube-system" namespace to be "Ready" ...
	I1026 01:04:13.143224   57917 pod_ready.go:92] pod "kube-proxy-rgrw7" in "kube-system" namespace has status "Ready":"True"
	I1026 01:04:13.143242   57917 pod_ready.go:81] duration metric: took 3.728043ms waiting for pod "kube-proxy-rgrw7" in "kube-system" namespace to be "Ready" ...
	I1026 01:04:13.143250   57917 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-075799" in "kube-system" namespace to be "Ready" ...
	I1026 01:04:13.322693   57917 request.go:629] Waited for 179.364477ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-075799
	I1026 01:04:13.521967   57917 request.go:629] Waited for 196.301976ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-075799
	I1026 01:04:13.524656   57917 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-075799" in "kube-system" namespace has status "Ready":"True"
	I1026 01:04:13.524678   57917 pod_ready.go:81] duration metric: took 381.422007ms waiting for pod "kube-scheduler-ingress-addon-legacy-075799" in "kube-system" namespace to be "Ready" ...
	I1026 01:04:13.524689   57917 pod_ready.go:38] duration metric: took 13.476734975s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
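	(The "Waited for ...ms due to client-side throttling" messages above come from the Kubernetes client's token-bucket rate limiter: requests beyond the allowed QPS accumulate a token deficit and must sleep until it refills. A minimal sketch of the idea; the class name and numbers here are illustrative, not client-go's actual implementation:)

```python
class Throttle:
    """Token-bucket client-side throttle: up to `burst` requests may go
    immediately; beyond that, tokens refill at `qps` per second and
    each request reports the delay it must wait before being sent."""
    def __init__(self, qps, burst):
        self.qps = float(qps)
        self.burst = float(burst)
        self.tokens = float(burst)  # available tokens; may go negative
        self.last = 0.0             # time of the last refill

    def acquire(self, now):
        # Refill tokens for the elapsed interval, capped at burst.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.qps)
        self.last = now
        # This request consumes one token, possibly driving a deficit.
        self.tokens -= 1.0
        if self.tokens >= 0.0:
            return 0.0
        # Deficit: wait until enough tokens have accrued to cover it.
        return -self.tokens / self.qps
```

	(With `qps=5, burst=1`, a burst of back-to-back GETs yields waits of 0ms, 200ms, 400ms, ... — the same shape as the ~180-200ms waits logged above.)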
	I1026 01:04:13.524732   57917 api_server.go:52] waiting for apiserver process to appear ...
	I1026 01:04:13.524779   57917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 01:04:13.535251   57917 api_server.go:72] duration metric: took 19.234911259s to wait for apiserver process to appear ...
	I1026 01:04:13.535276   57917 api_server.go:88] waiting for apiserver healthz status ...
	I1026 01:04:13.535299   57917 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1026 01:04:13.539988   57917 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1026 01:04:13.540776   57917 api_server.go:141] control plane version: v1.18.20
	I1026 01:04:13.540796   57917 api_server.go:131] duration metric: took 5.515901ms to wait for apiserver health ...
	I1026 01:04:13.540805   57917 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 01:04:13.722370   57917 request.go:629] Waited for 181.503022ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1026 01:04:13.730841   57917 system_pods.go:59] 8 kube-system pods found
	I1026 01:04:13.730872   57917 system_pods.go:61] "coredns-66bff467f8-75chh" [396af9ce-9a9d-404b-b2b0-b39098fbd6fe] Running
	I1026 01:04:13.730877   57917 system_pods.go:61] "etcd-ingress-addon-legacy-075799" [b87941f6-e6a5-4ef0-b04f-07541c3dbc85] Running
	I1026 01:04:13.730881   57917 system_pods.go:61] "kindnet-n949j" [c12bbc0a-d29d-47e3-bff3-33a40601665c] Running
	I1026 01:04:13.730885   57917 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-075799" [2b393b7b-22ab-4c77-a61c-c4ebbb0dc9dd] Running
	I1026 01:04:13.730890   57917 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-075799" [07fd73de-7000-4353-820e-fd1067b01615] Running
	I1026 01:04:13.730895   57917 system_pods.go:61] "kube-proxy-rgrw7" [5e8c501f-694c-4eda-bdc9-6212ddb695d0] Running
	I1026 01:04:13.730902   57917 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-075799" [48c7b68f-ee14-4a8f-b2d1-c056650dbba2] Running
	I1026 01:04:13.730907   57917 system_pods.go:61] "storage-provisioner" [6e47a3e4-9636-411b-9455-c859391d5f8b] Running
	I1026 01:04:13.730922   57917 system_pods.go:74] duration metric: took 190.112002ms to wait for pod list to return data ...
	I1026 01:04:13.730932   57917 default_sa.go:34] waiting for default service account to be created ...
	I1026 01:04:13.922372   57917 request.go:629] Waited for 191.359498ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I1026 01:04:13.924811   57917 default_sa.go:45] found service account: "default"
	I1026 01:04:13.924833   57917 default_sa.go:55] duration metric: took 193.891981ms for default service account to be created ...
	I1026 01:04:13.924841   57917 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 01:04:14.122278   57917 request.go:629] Waited for 197.368055ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1026 01:04:14.127479   57917 system_pods.go:86] 8 kube-system pods found
	I1026 01:04:14.127504   57917 system_pods.go:89] "coredns-66bff467f8-75chh" [396af9ce-9a9d-404b-b2b0-b39098fbd6fe] Running
	I1026 01:04:14.127510   57917 system_pods.go:89] "etcd-ingress-addon-legacy-075799" [b87941f6-e6a5-4ef0-b04f-07541c3dbc85] Running
	I1026 01:04:14.127514   57917 system_pods.go:89] "kindnet-n949j" [c12bbc0a-d29d-47e3-bff3-33a40601665c] Running
	I1026 01:04:14.127518   57917 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-075799" [2b393b7b-22ab-4c77-a61c-c4ebbb0dc9dd] Running
	I1026 01:04:14.127522   57917 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-075799" [07fd73de-7000-4353-820e-fd1067b01615] Running
	I1026 01:04:14.127526   57917 system_pods.go:89] "kube-proxy-rgrw7" [5e8c501f-694c-4eda-bdc9-6212ddb695d0] Running
	I1026 01:04:14.127529   57917 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-075799" [48c7b68f-ee14-4a8f-b2d1-c056650dbba2] Running
	I1026 01:04:14.127533   57917 system_pods.go:89] "storage-provisioner" [6e47a3e4-9636-411b-9455-c859391d5f8b] Running
	I1026 01:04:14.127539   57917 system_pods.go:126] duration metric: took 202.693734ms to wait for k8s-apps to be running ...
	I1026 01:04:14.127547   57917 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 01:04:14.127590   57917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 01:04:14.138237   57917 system_svc.go:56] duration metric: took 10.674936ms WaitForService to wait for kubelet.
	I1026 01:04:14.138268   57917 kubeadm.go:581] duration metric: took 19.837936708s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1026 01:04:14.138290   57917 node_conditions.go:102] verifying NodePressure condition ...
	I1026 01:04:14.322630   57917 request.go:629] Waited for 184.262999ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I1026 01:04:14.325291   57917 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1026 01:04:14.325315   57917 node_conditions.go:123] node cpu capacity is 8
	I1026 01:04:14.325327   57917 node_conditions.go:105] duration metric: took 187.032379ms to run NodePressure ...
	I1026 01:04:14.325337   57917 start.go:228] waiting for startup goroutines ...
	I1026 01:04:14.325343   57917 start.go:233] waiting for cluster config update ...
	I1026 01:04:14.325356   57917 start.go:242] writing updated cluster config ...
	I1026 01:04:14.325663   57917 ssh_runner.go:195] Run: rm -f paused
	I1026 01:04:14.371948   57917 start.go:600] kubectl: 1.28.3, cluster: 1.18.20 (minor skew: 10)
	I1026 01:04:14.374280   57917 out.go:177] 
	W1026 01:04:14.376054   57917 out.go:239] ! /usr/local/bin/kubectl is version 1.28.3, which may have incompatibilities with Kubernetes 1.18.20.
	I1026 01:04:14.377526   57917 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1026 01:04:14.379020   57917 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-075799" cluster and "default" namespace by default
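	(The "minor skew: 10" warning above is simple version arithmetic: kubectl 1.28.3 against cluster 1.18.20 differs by ten minor versions, far outside kubectl's supported skew. minikube's check lives in Go; this sketch just reproduces the arithmetic with an illustrative helper:)

```python
def minor_skew(client_version, server_version):
    """Minor-version distance between versions like '1.28.3' and
    '1.18.20'. Returns None on a major-version mismatch, where minor
    skew is not meaningful."""
    cmaj, cmin = (int(x) for x in client_version.split(".")[:2])
    smaj, smin = (int(x) for x in server_version.split(".")[:2])
    if cmaj != smaj:
        return None
    return abs(cmin - smin)
```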
	
	* 
	* ==> CRI-O <==
	* Oct 26 01:07:08 ingress-addon-legacy-075799 crio[961]: time="2023-10-26 01:07:08.053633468Z" level=info msg="Started container" PID=4862 containerID=49744a156bbebb03fb4d8aefc5e10fe6b734f4811e387b2378e8150ae8894860 description=default/hello-world-app-5f5d8b66bb-x2r2b/hello-world-app id=c9f6a20d-c615-4ef7-94e3-15aba08928de name=/runtime.v1alpha2.RuntimeService/StartContainer sandboxID=ef393eb7a3d5e90a8d9decfb78efa6a4d2a48f45bca69d781b87420671640d2d
	Oct 26 01:07:14 ingress-addon-legacy-075799 crio[961]: time="2023-10-26 01:07:14.606833986Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=9c38c55d-ca6e-4a6f-b434-15fe725dc4a1 name=/runtime.v1alpha2.ImageService/ImageStatus
	Oct 26 01:07:23 ingress-addon-legacy-075799 crio[961]: time="2023-10-26 01:07:23.607595072Z" level=info msg="Stopping pod sandbox: a65dddad8562f3c591367f25dec51bf9b68ff2c8702146dfba6bd0871f802f21" id=31e83fbd-a466-4a27-9dd4-d6269def8754 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Oct 26 01:07:23 ingress-addon-legacy-075799 crio[961]: time="2023-10-26 01:07:23.608631956Z" level=info msg="Stopped pod sandbox: a65dddad8562f3c591367f25dec51bf9b68ff2c8702146dfba6bd0871f802f21" id=31e83fbd-a466-4a27-9dd4-d6269def8754 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Oct 26 01:07:24 ingress-addon-legacy-075799 crio[961]: time="2023-10-26 01:07:24.139957323Z" level=info msg="Stopping pod sandbox: a65dddad8562f3c591367f25dec51bf9b68ff2c8702146dfba6bd0871f802f21" id=bd117889-b5ab-4df5-9828-4e624d936be3 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Oct 26 01:07:24 ingress-addon-legacy-075799 crio[961]: time="2023-10-26 01:07:24.140024938Z" level=info msg="Stopped pod sandbox (already stopped): a65dddad8562f3c591367f25dec51bf9b68ff2c8702146dfba6bd0871f802f21" id=bd117889-b5ab-4df5-9828-4e624d936be3 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Oct 26 01:07:24 ingress-addon-legacy-075799 crio[961]: time="2023-10-26 01:07:24.902600040Z" level=info msg="Stopping container: 123d60a447701d1c4d4fd9a95c8ca340060230d92da71e93adf0e0049583c5aa (timeout: 2s)" id=92bb2e5f-d370-43ba-8e23-875abc01fad1 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Oct 26 01:07:24 ingress-addon-legacy-075799 crio[961]: time="2023-10-26 01:07:24.904386440Z" level=info msg="Stopping container: 123d60a447701d1c4d4fd9a95c8ca340060230d92da71e93adf0e0049583c5aa (timeout: 2s)" id=78c21bf1-9f91-40a2-bc1f-d0622d51b2ee name=/runtime.v1alpha2.RuntimeService/StopContainer
	Oct 26 01:07:25 ingress-addon-legacy-075799 crio[961]: time="2023-10-26 01:07:25.606476838Z" level=info msg="Stopping pod sandbox: a65dddad8562f3c591367f25dec51bf9b68ff2c8702146dfba6bd0871f802f21" id=0a1acc93-cc22-43a5-8a72-9ead18228582 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Oct 26 01:07:25 ingress-addon-legacy-075799 crio[961]: time="2023-10-26 01:07:25.606536880Z" level=info msg="Stopped pod sandbox (already stopped): a65dddad8562f3c591367f25dec51bf9b68ff2c8702146dfba6bd0871f802f21" id=0a1acc93-cc22-43a5-8a72-9ead18228582 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Oct 26 01:07:26 ingress-addon-legacy-075799 crio[961]: time="2023-10-26 01:07:26.911755525Z" level=warning msg="Stopping container 123d60a447701d1c4d4fd9a95c8ca340060230d92da71e93adf0e0049583c5aa with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=92bb2e5f-d370-43ba-8e23-875abc01fad1 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Oct 26 01:07:26 ingress-addon-legacy-075799 conmon[3400]: conmon 123d60a447701d1c4d4f <ninfo>: container 3412 exited with status 137
	Oct 26 01:07:27 ingress-addon-legacy-075799 crio[961]: time="2023-10-26 01:07:27.078737728Z" level=info msg="Stopped container 123d60a447701d1c4d4fd9a95c8ca340060230d92da71e93adf0e0049583c5aa: ingress-nginx/ingress-nginx-controller-7fcf777cb7-rfh7x/controller" id=92bb2e5f-d370-43ba-8e23-875abc01fad1 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Oct 26 01:07:27 ingress-addon-legacy-075799 crio[961]: time="2023-10-26 01:07:27.078781046Z" level=info msg="Stopped container 123d60a447701d1c4d4fd9a95c8ca340060230d92da71e93adf0e0049583c5aa: ingress-nginx/ingress-nginx-controller-7fcf777cb7-rfh7x/controller" id=78c21bf1-9f91-40a2-bc1f-d0622d51b2ee name=/runtime.v1alpha2.RuntimeService/StopContainer
	Oct 26 01:07:27 ingress-addon-legacy-075799 crio[961]: time="2023-10-26 01:07:27.079370028Z" level=info msg="Stopping pod sandbox: 0d4e108fbedbccd819da8fedf74045e97951bbf4ee7f7516740e558ff8de4ee8" id=e2c91444-9a02-4ad8-82b5-e6443cef676a name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Oct 26 01:07:27 ingress-addon-legacy-075799 crio[961]: time="2023-10-26 01:07:27.079384594Z" level=info msg="Stopping pod sandbox: 0d4e108fbedbccd819da8fedf74045e97951bbf4ee7f7516740e558ff8de4ee8" id=cb89e3ee-edd8-4e05-945a-3414e366aff3 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Oct 26 01:07:27 ingress-addon-legacy-075799 crio[961]: time="2023-10-26 01:07:27.082223871Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-YMV7LQI3BW2MZO6M - [0:0]\n:KUBE-HP-3Z5JE4E52HIFZCXJ - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n-X KUBE-HP-YMV7LQI3BW2MZO6M\n-X KUBE-HP-3Z5JE4E52HIFZCXJ\nCOMMIT\n"
	Oct 26 01:07:27 ingress-addon-legacy-075799 crio[961]: time="2023-10-26 01:07:27.083506075Z" level=info msg="Closing host port tcp:80"
	Oct 26 01:07:27 ingress-addon-legacy-075799 crio[961]: time="2023-10-26 01:07:27.083550755Z" level=info msg="Closing host port tcp:443"
	Oct 26 01:07:27 ingress-addon-legacy-075799 crio[961]: time="2023-10-26 01:07:27.084562525Z" level=info msg="Host port tcp:80 does not have an open socket"
	Oct 26 01:07:27 ingress-addon-legacy-075799 crio[961]: time="2023-10-26 01:07:27.084580980Z" level=info msg="Host port tcp:443 does not have an open socket"
	Oct 26 01:07:27 ingress-addon-legacy-075799 crio[961]: time="2023-10-26 01:07:27.084737996Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7fcf777cb7-rfh7x Namespace:ingress-nginx ID:0d4e108fbedbccd819da8fedf74045e97951bbf4ee7f7516740e558ff8de4ee8 UID:2bc28930-b9a9-4ec4-81bf-84318b3b5b41 NetNS:/var/run/netns/484a0b3c-424a-4017-b735-bea26085b50b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 26 01:07:27 ingress-addon-legacy-075799 crio[961]: time="2023-10-26 01:07:27.084943896Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-rfh7x from CNI network \"kindnet\" (type=ptp)"
	Oct 26 01:07:27 ingress-addon-legacy-075799 crio[961]: time="2023-10-26 01:07:27.127216389Z" level=info msg="Stopped pod sandbox: 0d4e108fbedbccd819da8fedf74045e97951bbf4ee7f7516740e558ff8de4ee8" id=e2c91444-9a02-4ad8-82b5-e6443cef676a name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Oct 26 01:07:27 ingress-addon-legacy-075799 crio[961]: time="2023-10-26 01:07:27.127366952Z" level=info msg="Stopped pod sandbox (already stopped): 0d4e108fbedbccd819da8fedf74045e97951bbf4ee7f7516740e558ff8de4ee8" id=cb89e3ee-edd8-4e05-945a-3414e366aff3 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	49744a156bbeb       gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6            24 seconds ago      Running             hello-world-app           0                   ef393eb7a3d5e       hello-world-app-5f5d8b66bb-x2r2b
	7a975e900a377       docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d                    2 minutes ago       Running             nginx                     0                   748769452a4b0       nginx
	123d60a447701       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   3 minutes ago       Exited              controller                0                   0d4e108fbedbc       ingress-nginx-controller-7fcf777cb7-rfh7x
	2ca9099a89ea1       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              patch                     0                   d73d66e474391       ingress-nginx-admission-patch-tj8bb
	3d9b347ff2e20       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              create                    0                   5b9f40f9436db       ingress-nginx-admission-create-v7mt8
	4c2262e4595de       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   586d9802365ff       coredns-66bff467f8-75chh
	537dbccf175c1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       0                   c1e1a0cf51c51       storage-provisioner
	b64e1283f278b       docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052                 3 minutes ago       Running             kindnet-cni               0                   6965127d49c42       kindnet-n949j
	18e255bd6492a       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   82ef12ce92253       kube-proxy-rgrw7
	f2d74809e64ac       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   4 minutes ago       Running             etcd                      0                   92e8067fea838       etcd-ingress-addon-legacy-075799
	4b1c3cfaea9bc       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   4 minutes ago       Running             kube-apiserver            0                   39e0c91cd87a4       kube-apiserver-ingress-addon-legacy-075799
	e8162c7f8e539       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   4 minutes ago       Running             kube-scheduler            0                   e8b1d2d94d69c       kube-scheduler-ingress-addon-legacy-075799
	944aad05add68       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   4 minutes ago       Running             kube-controller-manager   0                   bb0818cd19add       kube-controller-manager-ingress-addon-legacy-075799
	
	* 
	* ==> coredns [4c2262e4595de10e09042f2d13a08c0fe9404c7716caa85d1cc1287112a57040] <==
	* [INFO] 10.244.0.5:46404 - 54757 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.005699526s
	[INFO] 10.244.0.5:52766 - 22303 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005879992s
	[INFO] 10.244.0.5:38451 - 44645 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006369176s
	[INFO] 10.244.0.5:60618 - 54394 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006174169s
	[INFO] 10.244.0.5:57203 - 17626 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006366123s
	[INFO] 10.244.0.5:53224 - 52696 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006277943s
	[INFO] 10.244.0.5:46404 - 17429 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006118516s
	[INFO] 10.244.0.5:46486 - 36731 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006287745s
	[INFO] 10.244.0.5:59181 - 515 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006551999s
	[INFO] 10.244.0.5:52766 - 21962 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.007275324s
	[INFO] 10.244.0.5:53224 - 31278 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006886846s
	[INFO] 10.244.0.5:46404 - 5455 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006894707s
	[INFO] 10.244.0.5:46486 - 32198 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.007053377s
	[INFO] 10.244.0.5:57203 - 6190 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.007024836s
	[INFO] 10.244.0.5:53224 - 15091 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000060993s
	[INFO] 10.244.0.5:60618 - 56517 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.007000853s
	[INFO] 10.244.0.5:46404 - 29791 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000051225s
	[INFO] 10.244.0.5:59181 - 34764 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.007116195s
	[INFO] 10.244.0.5:57203 - 63602 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000055878s
	[INFO] 10.244.0.5:38451 - 31265 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.007155364s
	[INFO] 10.244.0.5:59181 - 11040 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000049407s
	[INFO] 10.244.0.5:52766 - 5438 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000037479s
	[INFO] 10.244.0.5:60618 - 31563 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000116208s
	[INFO] 10.244.0.5:46486 - 59002 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000085352s
	[INFO] 10.244.0.5:38451 - 7005 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000073908s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-075799
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-075799
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=af1d352f1030f8f3ea7f97e311e7fe82ef319942
	                    minikube.k8s.io/name=ingress-addon-legacy-075799
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_26T01_03_39_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 26 Oct 2023 01:03:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-075799
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 26 Oct 2023 01:07:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 26 Oct 2023 01:07:09 +0000   Thu, 26 Oct 2023 01:03:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 26 Oct 2023 01:07:09 +0000   Thu, 26 Oct 2023 01:03:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 26 Oct 2023 01:07:09 +0000   Thu, 26 Oct 2023 01:03:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 26 Oct 2023 01:07:09 +0000   Thu, 26 Oct 2023 01:03:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-075799
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	System Info:
	  Machine ID:                 64d6306e61ef4780b1759c12e093e2f3
	  System UUID:                d99314d8-3a0f-48f7-bd7d-65a05d472c92
	  Boot ID:                    37a42525-bdda-4c41-ac15-6bc286a851a0
	  Kernel Version:             5.15.0-1045-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-x2r2b                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m49s
	  kube-system                 coredns-66bff467f8-75chh                               100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m38s
	  kube-system                 etcd-ingress-addon-legacy-075799                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 kindnet-n949j                                          100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m38s
	  kube-system                 kube-apiserver-ingress-addon-legacy-075799             250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-075799    200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 kube-proxy-rgrw7                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 kube-scheduler-ingress-addon-legacy-075799             100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m52s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)    100m (1%)
	  memory             120Mi (0%)   220Mi (0%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  NodeHasSufficientMemory  4m1s (x4 over 4m1s)  kubelet     Node ingress-addon-legacy-075799 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m1s (x4 over 4m1s)  kubelet     Node ingress-addon-legacy-075799 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m1s (x4 over 4m1s)  kubelet     Node ingress-addon-legacy-075799 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m53s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m53s                kubelet     Node ingress-addon-legacy-075799 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m53s                kubelet     Node ingress-addon-legacy-075799 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m53s                kubelet     Node ingress-addon-legacy-075799 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m37s                kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m33s                kubelet     Node ingress-addon-legacy-075799 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.004952] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.006580] FS-Cache: N-cookie d=00000000f988483e{9p.inode} n=00000000d3a39bfe
	[  +0.008740] FS-Cache: N-key=[8] '8ca00f0200000000'
	[  +0.289092] FS-Cache: Duplicate cookie detected
	[  +0.004711] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006774] FS-Cache: O-cookie d=00000000f988483e{9p.inode} n=00000000bc03619b
	[  +0.007366] FS-Cache: O-key=[8] '95a00f0200000000'
	[  +0.004973] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.007947] FS-Cache: N-cookie d=00000000f988483e{9p.inode} n=0000000035663f65
	[  +0.008716] FS-Cache: N-key=[8] '95a00f0200000000'
	[  +5.566645] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct26 01:04] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 86 6b e3 09 2e 60 e2 63 13 68 95 0e 08 00
	[  +1.027979] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 86 6b e3 09 2e 60 e2 63 13 68 95 0e 08 00
	[  +2.015817] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 86 6b e3 09 2e 60 e2 63 13 68 95 0e 08 00
	[Oct26 01:05] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 86 6b e3 09 2e 60 e2 63 13 68 95 0e 08 00
	[  +8.187230] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 86 6b e3 09 2e 60 e2 63 13 68 95 0e 08 00
	[ +16.126419] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 86 6b e3 09 2e 60 e2 63 13 68 95 0e 08 00
	[ +32.764868] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 86 6b e3 09 2e 60 e2 63 13 68 95 0e 08 00
	
	* 
	* ==> etcd [f2d74809e64acd611df0f23ba6a7d7f3f2e874e1564e59cadae0a9cfa76341e1] <==
	* 2023-10-26 01:03:32.199070 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-10-26 01:03:32.200439 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/10/26 01:03:32 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-10-26 01:03:32.201253 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2023-10-26 01:03:32.201598 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-10-26 01:03:32.201800 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-10-26 01:03:32.201966 I | embed: listening for peers on 192.168.49.2:2380
	raft2023/10/26 01:03:33 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/10/26 01:03:33 INFO: aec36adc501070cc became candidate at term 2
	raft2023/10/26 01:03:33 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/10/26 01:03:33 INFO: aec36adc501070cc became leader at term 2
	raft2023/10/26 01:03:33 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-10-26 01:03:33.193357 I | etcdserver: setting up the initial cluster version to 3.4
	2023-10-26 01:03:33.194238 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-10-26 01:03:33.194303 I | etcdserver/api: enabled capabilities for version 3.4
	2023-10-26 01:03:33.194322 I | embed: ready to serve client requests
	2023-10-26 01:03:33.194329 I | embed: ready to serve client requests
	2023-10-26 01:03:33.194459 I | etcdserver: published {Name:ingress-addon-legacy-075799 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-10-26 01:03:33.196334 I | embed: serving client requests on 127.0.0.1:2379
	2023-10-26 01:03:33.196565 I | embed: serving client requests on 192.168.49.2:2379
	2023-10-26 01:03:59.062613 W | etcdserver: read-only range request "key:\"/registry/minions/ingress-addon-legacy-075799\" " with result "range_response_count:1 size:6604" took too long (129.597678ms) to execute
	2023-10-26 01:04:00.045317 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-66bff467f8-75chh\" " with result "range_response_count:1 size:3753" took too long (200.681713ms) to execute
	2023-10-26 01:04:00.045583 W | etcdserver: request "header:<ID:8128024712766514156 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/storage-provisioner\" mod_revision:384 > success:<request_put:<key:\"/registry/pods/kube-system/storage-provisioner\" value_size:2626 >> failure:<request_range:<key:\"/registry/pods/kube-system/storage-provisioner\" > >>" with result "size:16" took too long (129.615954ms) to execute
	2023-10-26 01:04:00.045787 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/coredns-66bff467f8-75chh.179182b1b260da68\" " with result "range_response_count:1 size:829" took too long (200.890123ms) to execute
	2023-10-26 01:04:00.045828 W | etcdserver: read-only range request "key:\"/registry/minions/ingress-addon-legacy-075799\" " with result "range_response_count:1 size:6390" took too long (112.943758ms) to execute
	
	* 
	* ==> kernel <==
	*  01:07:32 up 49 min,  0 users,  load average: 0.17, 0.59, 0.49
	Linux ingress-addon-legacy-075799 5.15.0-1045-gcp #53~20.04.2-Ubuntu SMP Wed Oct 18 12:59:20 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [b64e1283f278b80de68d565aecad9ace11548a09d221a9757f6fef580d12fee8] <==
	* I1026 01:05:28.428315       1 main.go:227] handling current node
	I1026 01:05:38.436825       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1026 01:05:38.436851       1 main.go:227] handling current node
	I1026 01:05:48.440904       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1026 01:05:48.440926       1 main.go:227] handling current node
	I1026 01:05:58.452696       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1026 01:05:58.452722       1 main.go:227] handling current node
	I1026 01:06:08.455925       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1026 01:06:08.455950       1 main.go:227] handling current node
	I1026 01:06:18.464835       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1026 01:06:18.464861       1 main.go:227] handling current node
	I1026 01:06:28.468873       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1026 01:06:28.468900       1 main.go:227] handling current node
	I1026 01:06:38.480991       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1026 01:06:38.481022       1 main.go:227] handling current node
	I1026 01:06:48.485420       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1026 01:06:48.485446       1 main.go:227] handling current node
	I1026 01:06:58.488577       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1026 01:06:58.488601       1 main.go:227] handling current node
	I1026 01:07:08.500627       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1026 01:07:08.500654       1 main.go:227] handling current node
	I1026 01:07:18.506574       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1026 01:07:18.506600       1 main.go:227] handling current node
	I1026 01:07:28.509923       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1026 01:07:28.509951       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [4b1c3cfaea9bc3cebea4277a923eca8bb92a55afcd9b604ff32516cf1af886a8] <==
	* I1026 01:03:36.300994       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	E1026 01:03:36.302562       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I1026 01:03:36.399604       1 cache.go:39] Caches are synced for autoregister controller
	I1026 01:03:36.400303       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1026 01:03:36.400450       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I1026 01:03:36.400519       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I1026 01:03:36.401477       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1026 01:03:37.298602       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1026 01:03:37.298635       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1026 01:03:37.303302       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1026 01:03:37.306013       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1026 01:03:37.306041       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1026 01:03:37.634584       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 01:03:37.692961       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1026 01:03:37.816072       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1026 01:03:37.816934       1 controller.go:609] quota admission added evaluator for: endpoints
	I1026 01:03:37.819933       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 01:03:38.582098       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1026 01:03:39.178692       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1026 01:03:39.406526       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1026 01:03:39.591400       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 01:03:54.138537       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1026 01:03:54.703392       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1026 01:04:15.139841       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I1026 01:04:43.632525       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	* 
	* ==> kube-controller-manager [944aad05add686008aee34b36575026bc20ef84c7cf7285fcefaf351b0fa828e] <==
	* E1026 01:03:54.714687       1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
	E1026 01:03:54.714906       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	I1026 01:03:54.715160       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"d8c29e31-c9e5-467c-974d-ebfa5a9a0c14", APIVersion:"apps/v1", ResourceVersion:"215", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-rgrw7
	I1026 01:03:54.717773       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"ccb84aba-fd40-480f-b8d2-e8619a753ccb", APIVersion:"apps/v1", ResourceVersion:"238", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-n949j
	I1026 01:03:54.790027       1 shared_informer.go:230] Caches are synced for job 
	I1026 01:03:54.790039       1 shared_informer.go:230] Caches are synced for resource quota 
	I1026 01:03:54.790432       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1026 01:03:54.790459       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1026 01:03:54.791003       1 shared_informer.go:230] Caches are synced for resource quota 
	E1026 01:03:54.816378       1 daemon_controller.go:321] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"d8c29e31-c9e5-467c-974d-ebfa5a9a0c14", ResourceVersion:"215", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63833879019, loc:(*time.Location)(0x6d002e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc00124c040), FieldsType:"FieldsV1", FieldsV1:(*v1.Fields
V1)(0xc00124c0a0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc00124c100), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(n
il), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc0013eaec0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSou
rce)(0xc00124c160), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.Pr
ojectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00124c1c0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolum
eSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.18.20", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc00124c280)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList
(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc0012c2280), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0013baa68), Acti
veDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000471dc0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPoli
cy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc000d4e440)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0013baac8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	E1026 01:03:54.818313       1 daemon_controller.go:321] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet", UID:"ccb84aba-fd40-480f-b8d2-e8619a753ccb", ResourceVersion:"238", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63833879019, loc:(*time.Location)(0x6d002e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\
"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20230809-80a64d96\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",
\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc00124c3c0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00124c420)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc00124c480), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*
int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00124c4e0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI
:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00124c540), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVol
umeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00124c5c0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDis
k:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), Sca
leIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20230809-80a64d96", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc00124c620)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc00124c6e0)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.Re
sourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log"
, TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc0012c2500), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0013bacc8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000471e30), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.P
odDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc000d4e448)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0013bad10)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I1026 01:03:55.084307       1 request.go:621] Throttling request took 1.043876157s, request: GET:https://control-plane.minikube.internal:8443/apis/batch/v1beta1?timeout=32s
	I1026 01:03:55.535522       1 shared_informer.go:223] Waiting for caches to sync for garbage collector
	I1026 01:03:55.535573       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1026 01:04:04.190851       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I1026 01:04:15.133964       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"a1f4ffab-d1c6-435f-a7a9-b41bea5870a4", APIVersion:"apps/v1", ResourceVersion:"470", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1026 01:04:15.141338       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"eaf9a188-9b02-43b1-894e-17ec34ca63fd", APIVersion:"apps/v1", ResourceVersion:"471", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-rfh7x
	I1026 01:04:15.195291       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"eb1dace7-e429-4961-ad32-efb4f771c358", APIVersion:"batch/v1", ResourceVersion:"475", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-v7mt8
	I1026 01:04:15.208907       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"860011d5-7aaa-4280-87f2-2627c692c13b", APIVersion:"batch/v1", ResourceVersion:"486", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-tj8bb
	I1026 01:04:18.739787       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"eb1dace7-e429-4961-ad32-efb4f771c358", APIVersion:"batch/v1", ResourceVersion:"488", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1026 01:04:18.746591       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"860011d5-7aaa-4280-87f2-2627c692c13b", APIVersion:"batch/v1", ResourceVersion:"497", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1026 01:07:06.148438       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"22f399c2-f640-4a97-be83-995a4fb3128d", APIVersion:"apps/v1", ResourceVersion:"714", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I1026 01:07:06.153219       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"a16be5b8-2b78-4f15-b2ec-62a1024b9d85", APIVersion:"apps/v1", ResourceVersion:"715", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-x2r2b
	
	* 
	* ==> kube-proxy [18e255bd6492ae6cf3af2aa410757dcfec972701c46b9f273045d52e972bf6fc] <==
	* W1026 01:03:55.276619       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1026 01:03:55.282855       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I1026 01:03:55.282891       1 server_others.go:186] Using iptables Proxier.
	I1026 01:03:55.283138       1 server.go:583] Version: v1.18.20
	I1026 01:03:55.283561       1 config.go:133] Starting endpoints config controller
	I1026 01:03:55.283587       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1026 01:03:55.283694       1 config.go:315] Starting service config controller
	I1026 01:03:55.283713       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1026 01:03:55.383806       1 shared_informer.go:230] Caches are synced for endpoints config 
	I1026 01:03:55.383892       1 shared_informer.go:230] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [e8162c7f8e539867181643e6f4020f3031ff385af806bef8dab98b99e318635d] <==
	* W1026 01:03:36.316597       1 authentication.go:349] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1026 01:03:36.316720       1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1026 01:03:36.316761       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1026 01:03:36.316811       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1026 01:03:36.404794       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1026 01:03:36.404818       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1026 01:03:36.409641       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1026 01:03:36.409750       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1026 01:03:36.409815       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1026 01:03:36.409883       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1026 01:03:36.412290       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1026 01:03:36.412473       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1026 01:03:36.412505       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1026 01:03:36.412567       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1026 01:03:36.412597       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1026 01:03:36.412587       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1026 01:03:36.412691       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1026 01:03:36.412723       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1026 01:03:36.412798       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1026 01:03:36.412948       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1026 01:03:36.412953       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1026 01:03:36.412998       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1026 01:03:37.374229       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I1026 01:03:37.909943       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E1026 01:03:54.990675       1 factory.go:503] pod: kube-system/storage-provisioner is already present in the active queue
	
	* 
	* ==> kubelet <==
	* Oct 26 01:06:48 ingress-addon-legacy-075799 kubelet[1868]: E1026 01:06:48.607321    1868 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Oct 26 01:06:48 ingress-addon-legacy-075799 kubelet[1868]: E1026 01:06:48.607364    1868 pod_workers.go:191] Error syncing pod abea9787-ef57-4762-90e5-8181a7ca59d0 ("kube-ingress-dns-minikube_kube-system(abea9787-ef57-4762-90e5-8181a7ca59d0)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Oct 26 01:07:01 ingress-addon-legacy-075799 kubelet[1868]: E1026 01:07:01.607083    1868 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Oct 26 01:07:01 ingress-addon-legacy-075799 kubelet[1868]: E1026 01:07:01.607136    1868 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Oct 26 01:07:01 ingress-addon-legacy-075799 kubelet[1868]: E1026 01:07:01.607195    1868 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Oct 26 01:07:01 ingress-addon-legacy-075799 kubelet[1868]: E1026 01:07:01.607230    1868 pod_workers.go:191] Error syncing pod abea9787-ef57-4762-90e5-8181a7ca59d0 ("kube-ingress-dns-minikube_kube-system(abea9787-ef57-4762-90e5-8181a7ca59d0)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Oct 26 01:07:06 ingress-addon-legacy-075799 kubelet[1868]: I1026 01:07:06.158688    1868 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Oct 26 01:07:06 ingress-addon-legacy-075799 kubelet[1868]: I1026 01:07:06.328429    1868 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-nmh9v" (UniqueName: "kubernetes.io/secret/6f6d0e89-a2e4-4f0d-80c9-685432b89a53-default-token-nmh9v") pod "hello-world-app-5f5d8b66bb-x2r2b" (UID: "6f6d0e89-a2e4-4f0d-80c9-685432b89a53")
	Oct 26 01:07:06 ingress-addon-legacy-075799 kubelet[1868]: W1026 01:07:06.530579    1868 manager.go:1131] Failed to process watch event {EventType:0 Name:/docker/e7ff3f9af75fde04e87ce440917c5ec1ce4d561d8446d922760cfc426ef3f160/crio-ef393eb7a3d5e90a8d9decfb78efa6a4d2a48f45bca69d781b87420671640d2d WatchSource:0}: Error finding container ef393eb7a3d5e90a8d9decfb78efa6a4d2a48f45bca69d781b87420671640d2d: Status 404 returned error &{%!!(MISSING)s(*http.body=&{0xc000cb3da0 <nil> <nil> false false {0 0} false false false <nil>}) {%!!(MISSING)s(int32=0) %!!(MISSING)s(uint32=0)} %!!(MISSING)s(bool=false) <nil> %!!(MISSING)s(func(error) error=0x750800) %!!(MISSING)s(func() error=0x750790)}
	Oct 26 01:07:14 ingress-addon-legacy-075799 kubelet[1868]: E1026 01:07:14.607175    1868 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Oct 26 01:07:14 ingress-addon-legacy-075799 kubelet[1868]: E1026 01:07:14.607217    1868 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Oct 26 01:07:14 ingress-addon-legacy-075799 kubelet[1868]: E1026 01:07:14.607268    1868 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Oct 26 01:07:14 ingress-addon-legacy-075799 kubelet[1868]: E1026 01:07:14.607302    1868 pod_workers.go:191] Error syncing pod abea9787-ef57-4762-90e5-8181a7ca59d0 ("kube-ingress-dns-minikube_kube-system(abea9787-ef57-4762-90e5-8181a7ca59d0)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Oct 26 01:07:21 ingress-addon-legacy-075799 kubelet[1868]: I1026 01:07:21.925999    1868 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-6mzd5" (UniqueName: "kubernetes.io/secret/abea9787-ef57-4762-90e5-8181a7ca59d0-minikube-ingress-dns-token-6mzd5") pod "abea9787-ef57-4762-90e5-8181a7ca59d0" (UID: "abea9787-ef57-4762-90e5-8181a7ca59d0")
	Oct 26 01:07:21 ingress-addon-legacy-075799 kubelet[1868]: I1026 01:07:21.927861    1868 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abea9787-ef57-4762-90e5-8181a7ca59d0-minikube-ingress-dns-token-6mzd5" (OuterVolumeSpecName: "minikube-ingress-dns-token-6mzd5") pod "abea9787-ef57-4762-90e5-8181a7ca59d0" (UID: "abea9787-ef57-4762-90e5-8181a7ca59d0"). InnerVolumeSpecName "minikube-ingress-dns-token-6mzd5". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 26 01:07:22 ingress-addon-legacy-075799 kubelet[1868]: I1026 01:07:22.026355    1868 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-6mzd5" (UniqueName: "kubernetes.io/secret/abea9787-ef57-4762-90e5-8181a7ca59d0-minikube-ingress-dns-token-6mzd5") on node "ingress-addon-legacy-075799" DevicePath ""
	Oct 26 01:07:24 ingress-addon-legacy-075799 kubelet[1868]: E1026 01:07:24.903689    1868 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-rfh7x.179182e2c0acd952", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-rfh7x", UID:"2bc28930-b9a9-4ec4-81bf-84318b3b5b41", APIVersion:"v1", ResourceVersion:"478", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-075799"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1468cd335c5e152, ext:225757246247, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1468cd335c5e152, ext:225757246247, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-rfh7x.179182e2c0acd952" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Oct 26 01:07:24 ingress-addon-legacy-075799 kubelet[1868]: E1026 01:07:24.907329    1868 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-rfh7x.179182e2c0acd952", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-rfh7x", UID:"2bc28930-b9a9-4ec4-81bf-84318b3b5b41", APIVersion:"v1", ResourceVersion:"478", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-075799"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1468cd335c5e152, ext:225757246247, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1468cd335e3c8c7, ext:225759206036, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-rfh7x.179182e2c0acd952" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Oct 26 01:07:27 ingress-addon-legacy-075799 kubelet[1868]: W1026 01:07:27.134758    1868 pod_container_deletor.go:77] Container "0d4e108fbedbccd819da8fedf74045e97951bbf4ee7f7516740e558ff8de4ee8" not found in pod's containers
	Oct 26 01:07:29 ingress-addon-legacy-075799 kubelet[1868]: I1026 01:07:29.043158    1868 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-9h9m4" (UniqueName: "kubernetes.io/secret/2bc28930-b9a9-4ec4-81bf-84318b3b5b41-ingress-nginx-token-9h9m4") pod "2bc28930-b9a9-4ec4-81bf-84318b3b5b41" (UID: "2bc28930-b9a9-4ec4-81bf-84318b3b5b41")
	Oct 26 01:07:29 ingress-addon-legacy-075799 kubelet[1868]: I1026 01:07:29.043211    1868 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/2bc28930-b9a9-4ec4-81bf-84318b3b5b41-webhook-cert") pod "2bc28930-b9a9-4ec4-81bf-84318b3b5b41" (UID: "2bc28930-b9a9-4ec4-81bf-84318b3b5b41")
	Oct 26 01:07:29 ingress-addon-legacy-075799 kubelet[1868]: I1026 01:07:29.045264    1868 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bc28930-b9a9-4ec4-81bf-84318b3b5b41-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "2bc28930-b9a9-4ec4-81bf-84318b3b5b41" (UID: "2bc28930-b9a9-4ec4-81bf-84318b3b5b41"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 26 01:07:29 ingress-addon-legacy-075799 kubelet[1868]: I1026 01:07:29.045506    1868 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bc28930-b9a9-4ec4-81bf-84318b3b5b41-ingress-nginx-token-9h9m4" (OuterVolumeSpecName: "ingress-nginx-token-9h9m4") pod "2bc28930-b9a9-4ec4-81bf-84318b3b5b41" (UID: "2bc28930-b9a9-4ec4-81bf-84318b3b5b41"). InnerVolumeSpecName "ingress-nginx-token-9h9m4". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 26 01:07:29 ingress-addon-legacy-075799 kubelet[1868]: I1026 01:07:29.143456    1868 reconciler.go:319] Volume detached for volume "ingress-nginx-token-9h9m4" (UniqueName: "kubernetes.io/secret/2bc28930-b9a9-4ec4-81bf-84318b3b5b41-ingress-nginx-token-9h9m4") on node "ingress-addon-legacy-075799" DevicePath ""
	Oct 26 01:07:29 ingress-addon-legacy-075799 kubelet[1868]: I1026 01:07:29.143483    1868 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/2bc28930-b9a9-4ec4-81bf-84318b3b5b41-webhook-cert") on node "ingress-addon-legacy-075799" DevicePath ""
	
	* 
	* ==> storage-provisioner [537dbccf175c1bddf926a558ae6bc69cdc6c3ff61a102fb3bda0f953c3d06bd4] <==
	* I1026 01:04:00.832437       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1026 01:04:00.840236       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 01:04:00.840290       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1026 01:04:00.845781       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 01:04:00.845914       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-075799_082dbc28-48dd-4d31-a720-5ab26cbde045!
	I1026 01:04:00.846025       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1698daea-7447-4b64-9bd1-bd29664672af", APIVersion:"v1", ResourceVersion:"411", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-075799_082dbc28-48dd-4d31-a720-5ab26cbde045 became leader
	I1026 01:04:00.946739       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-075799_082dbc28-48dd-4d31-a720-5ab26cbde045!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-075799 -n ingress-addon-legacy-075799
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-075799 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (187.50s)
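Note: the repeated `short-name ... did not resolve to an alias` kubelet errors above come from CRI-O refusing to pull `cryptexlabs/minikube-ingress-dns` because the image reference carries no registry host and the node's `/etc/containers/registries.conf` declares no unqualified-search registries. A minimal sketch of the setting that would let such short names resolve (assuming docker.io is the registry this image is meant to come from):

```toml
# /etc/containers/registries.conf (sketch, not the node's actual file)
# With this list present, "cryptexlabs/minikube-ingress-dns:0.3.0" is tried
# as "docker.io/cryptexlabs/minikube-ingress-dns:0.3.0".
unqualified-search-registries = ["docker.io"]
```

The more robust fix is to reference the image fully qualified (`docker.io/cryptexlabs/minikube-ingress-dns:0.3.0@sha256:...`) in the addon manifest, so pulls do not depend on per-node registry configuration.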

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (3.53s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-204768 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-204768 -- exec busybox-5bc68d56bd-j4c2s -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-204768 -- exec busybox-5bc68d56bd-j4c2s -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-204768 -- exec busybox-5bc68d56bd-j4c2s -- sh -c "ping -c 1 192.168.58.1": exit status 1 (180.50999ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-j4c2s): exit status 1
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-204768 -- exec busybox-5bc68d56bd-lvqzv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-204768 -- exec busybox-5bc68d56bd-lvqzv -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-204768 -- exec busybox-5bc68d56bd-lvqzv -- sh -c "ping -c 1 192.168.58.1": exit status 1 (188.669948ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-lvqzv): exit status 1
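Note: `ping: permission denied (are you root?)` from a busybox pod usually means the container has neither CAP_NET_RAW (needed to open a raw ICMP socket) nor a GID covered by the node's `net.ipv4.ping_group_range` sysctl (needed for an unprivileged ICMP datagram socket). A sketch of one workaround, granting the capability in the test deployment's pod spec (container name assumed from the pod names above):

```yaml
# Sketch only: add CAP_NET_RAW to the busybox container so ping can
# open a raw ICMP socket. Alternatively, widen net.ipv4.ping_group_range
# on the node to permit unprivileged ICMP datagram sockets.
spec:
  template:
    spec:
      containers:
        - name: busybox
          securityContext:
            capabilities:
              add: ["NET_RAW"]
```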
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-204768
helpers_test.go:235: (dbg) docker inspect multinode-204768:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "704cc6eb735cf196584d203a621fd6870e0b0e3f9808545cb7993a1ec9708344",
	        "Created": "2023-10-26T01:12:36.407740912Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 104668,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-26T01:12:36.687885869Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3e615aae66792e89a7d2c001b5c02b5e78a999706d53f7c8dbfcff1520487fdd",
	        "ResolvConfPath": "/var/lib/docker/containers/704cc6eb735cf196584d203a621fd6870e0b0e3f9808545cb7993a1ec9708344/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/704cc6eb735cf196584d203a621fd6870e0b0e3f9808545cb7993a1ec9708344/hostname",
	        "HostsPath": "/var/lib/docker/containers/704cc6eb735cf196584d203a621fd6870e0b0e3f9808545cb7993a1ec9708344/hosts",
	        "LogPath": "/var/lib/docker/containers/704cc6eb735cf196584d203a621fd6870e0b0e3f9808545cb7993a1ec9708344/704cc6eb735cf196584d203a621fd6870e0b0e3f9808545cb7993a1ec9708344-json.log",
	        "Name": "/multinode-204768",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-204768:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "multinode-204768",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4bbb75c0288db46f02f7dd809dfffa0c8ca00d84ac1e7cc836259e1e823721f5-init/diff:/var/lib/docker/overlay2/007d7e88bd091d08c1a177e3000477192ad6785f5c636023d34df0777872a721/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4bbb75c0288db46f02f7dd809dfffa0c8ca00d84ac1e7cc836259e1e823721f5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4bbb75c0288db46f02f7dd809dfffa0c8ca00d84ac1e7cc836259e1e823721f5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4bbb75c0288db46f02f7dd809dfffa0c8ca00d84ac1e7cc836259e1e823721f5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-204768",
	                "Source": "/var/lib/docker/volumes/multinode-204768/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-204768",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-204768",
	                "name.minikube.sigs.k8s.io": "multinode-204768",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "949dd2bb5a9e38964f031be5baecf14b435842e410566fd983fe7f434c1ed9fe",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32849"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32848"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32845"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32847"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32846"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/949dd2bb5a9e",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-204768": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "704cc6eb735c",
	                        "multinode-204768"
	                    ],
	                    "NetworkID": "3243a6b25050a7b3cfe1ab4e961857ab07740decea5804989a3c2b50c63798a0",
	                    "EndpointID": "de4a96a730eb354fddeb714c341b7b1947a69d3f0e343a2010b9562a5a160d89",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
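Note: the gateway address the test pings (192.168.58.1) is the one recorded under `NetworkSettings.Networks` in the inspect output above. A small sketch of extracting it; the runnable part works on an embedded copy of that JSON so it needs no Docker daemon, and the commented `docker inspect -f` line is the live-daemon equivalent:

```shell
# Trimmed copy of the inspect JSON above (single Gateway key).
json='{"NetworkSettings":{"Networks":{"multinode-204768":{"Gateway":"192.168.58.1"}}}}'
# Pull the Gateway value out with sed.
gateway=$(printf '%s' "$json" | sed -n 's/.*"Gateway":"\([0-9.]*\)".*/\1/p')
echo "$gateway"   # prints 192.168.58.1
# Against a running daemon, the equivalent Go-template query would be:
#   docker inspect -f '{{ (index .NetworkSettings.Networks "multinode-204768").Gateway }}' multinode-204768
```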
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-204768 -n multinode-204768
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204768 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-204768 logs -n 25: (1.365673181s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-632991                           | mount-start-2-632991 | jenkins | v1.31.2 | 26 Oct 23 01:12 UTC | 26 Oct 23 01:12 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-632991 ssh -- ls                    | mount-start-2-632991 | jenkins | v1.31.2 | 26 Oct 23 01:12 UTC | 26 Oct 23 01:12 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-614829                           | mount-start-1-614829 | jenkins | v1.31.2 | 26 Oct 23 01:12 UTC | 26 Oct 23 01:12 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-632991 ssh -- ls                    | mount-start-2-632991 | jenkins | v1.31.2 | 26 Oct 23 01:12 UTC | 26 Oct 23 01:12 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-632991                           | mount-start-2-632991 | jenkins | v1.31.2 | 26 Oct 23 01:12 UTC | 26 Oct 23 01:12 UTC |
	| start   | -p mount-start-2-632991                           | mount-start-2-632991 | jenkins | v1.31.2 | 26 Oct 23 01:12 UTC | 26 Oct 23 01:12 UTC |
	| ssh     | mount-start-2-632991 ssh -- ls                    | mount-start-2-632991 | jenkins | v1.31.2 | 26 Oct 23 01:12 UTC | 26 Oct 23 01:12 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-632991                           | mount-start-2-632991 | jenkins | v1.31.2 | 26 Oct 23 01:12 UTC | 26 Oct 23 01:12 UTC |
	| delete  | -p mount-start-1-614829                           | mount-start-1-614829 | jenkins | v1.31.2 | 26 Oct 23 01:12 UTC | 26 Oct 23 01:12 UTC |
	| start   | -p multinode-204768                               | multinode-204768     | jenkins | v1.31.2 | 26 Oct 23 01:12 UTC | 26 Oct 23 01:14 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-204768 -- apply -f                   | multinode-204768     | jenkins | v1.31.2 | 26 Oct 23 01:14 UTC | 26 Oct 23 01:14 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-204768 -- rollout                    | multinode-204768     | jenkins | v1.31.2 | 26 Oct 23 01:14 UTC | 26 Oct 23 01:14 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-204768 -- get pods -o                | multinode-204768     | jenkins | v1.31.2 | 26 Oct 23 01:14 UTC | 26 Oct 23 01:14 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-204768 -- get pods -o                | multinode-204768     | jenkins | v1.31.2 | 26 Oct 23 01:14 UTC | 26 Oct 23 01:14 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-204768 -- exec                       | multinode-204768     | jenkins | v1.31.2 | 26 Oct 23 01:14 UTC | 26 Oct 23 01:14 UTC |
	|         | busybox-5bc68d56bd-j4c2s --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-204768 -- exec                       | multinode-204768     | jenkins | v1.31.2 | 26 Oct 23 01:14 UTC | 26 Oct 23 01:14 UTC |
	|         | busybox-5bc68d56bd-lvqzv --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-204768 -- exec                       | multinode-204768     | jenkins | v1.31.2 | 26 Oct 23 01:14 UTC | 26 Oct 23 01:14 UTC |
	|         | busybox-5bc68d56bd-j4c2s --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-204768 -- exec                       | multinode-204768     | jenkins | v1.31.2 | 26 Oct 23 01:14 UTC | 26 Oct 23 01:14 UTC |
	|         | busybox-5bc68d56bd-lvqzv --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-204768 -- exec                       | multinode-204768     | jenkins | v1.31.2 | 26 Oct 23 01:14 UTC | 26 Oct 23 01:14 UTC |
	|         | busybox-5bc68d56bd-j4c2s -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-204768 -- exec                       | multinode-204768     | jenkins | v1.31.2 | 26 Oct 23 01:14 UTC | 26 Oct 23 01:14 UTC |
	|         | busybox-5bc68d56bd-lvqzv -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-204768 -- get pods -o                | multinode-204768     | jenkins | v1.31.2 | 26 Oct 23 01:14 UTC | 26 Oct 23 01:14 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-204768 -- exec                       | multinode-204768     | jenkins | v1.31.2 | 26 Oct 23 01:14 UTC | 26 Oct 23 01:14 UTC |
	|         | busybox-5bc68d56bd-j4c2s                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-204768 -- exec                       | multinode-204768     | jenkins | v1.31.2 | 26 Oct 23 01:14 UTC |                     |
	|         | busybox-5bc68d56bd-j4c2s -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-204768 -- exec                       | multinode-204768     | jenkins | v1.31.2 | 26 Oct 23 01:14 UTC | 26 Oct 23 01:14 UTC |
	|         | busybox-5bc68d56bd-lvqzv                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-204768 -- exec                       | multinode-204768     | jenkins | v1.31.2 | 26 Oct 23 01:14 UTC |                     |
	|         | busybox-5bc68d56bd-lvqzv -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/26 01:12:30
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 01:12:30.356654  104058 out.go:296] Setting OutFile to fd 1 ...
	I1026 01:12:30.356799  104058 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1026 01:12:30.356812  104058 out.go:309] Setting ErrFile to fd 2...
	I1026 01:12:30.356820  104058 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1026 01:12:30.357047  104058 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17491-8444/.minikube/bin
	I1026 01:12:30.357634  104058 out.go:303] Setting JSON to false
	I1026 01:12:30.359084  104058 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3296,"bootTime":1698279454,"procs":880,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 01:12:30.359148  104058 start.go:138] virtualization: kvm guest
	I1026 01:12:30.361503  104058 out.go:177] * [multinode-204768] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1026 01:12:30.363081  104058 out.go:177]   - MINIKUBE_LOCATION=17491
	I1026 01:12:30.364544  104058 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 01:12:30.363114  104058 notify.go:220] Checking for updates...
	I1026 01:12:30.366086  104058 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17491-8444/kubeconfig
	I1026 01:12:30.367558  104058 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17491-8444/.minikube
	I1026 01:12:30.368933  104058 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 01:12:30.370536  104058 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 01:12:30.372107  104058 driver.go:378] Setting default libvirt URI to qemu:///system
	I1026 01:12:30.394916  104058 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1026 01:12:30.395042  104058 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 01:12:30.449736  104058 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2023-10-26 01:12:30.440266741 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 01:12:30.449849  104058 docker.go:295] overlay module found
	I1026 01:12:30.451845  104058 out.go:177] * Using the docker driver based on user configuration
	I1026 01:12:30.453379  104058 start.go:298] selected driver: docker
	I1026 01:12:30.453397  104058 start.go:902] validating driver "docker" against <nil>
	I1026 01:12:30.453408  104058 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 01:12:30.454260  104058 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 01:12:30.506146  104058 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2023-10-26 01:12:30.497420762 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 01:12:30.506333  104058 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1026 01:12:30.506519  104058 start_flags.go:934] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 01:12:30.508368  104058 out.go:177] * Using Docker driver with root privileges
	I1026 01:12:30.510768  104058 cni.go:84] Creating CNI manager for ""
	I1026 01:12:30.510792  104058 cni.go:136] 0 nodes found, recommending kindnet
	I1026 01:12:30.510803  104058 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1026 01:12:30.510814  104058 start_flags.go:323] config:
	{Name:multinode-204768 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-204768 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1026 01:12:30.512907  104058 out.go:177] * Starting control plane node multinode-204768 in cluster multinode-204768
	I1026 01:12:30.514461  104058 cache.go:121] Beginning downloading kic base image for docker with crio
	I1026 01:12:30.516178  104058 out.go:177] * Pulling base image ...
	I1026 01:12:30.517738  104058 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1026 01:12:30.517760  104058 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon
	I1026 01:12:30.517777  104058 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17491-8444/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1026 01:12:30.517784  104058 cache.go:56] Caching tarball of preloaded images
	I1026 01:12:30.517878  104058 preload.go:174] Found /home/jenkins/minikube-integration/17491-8444/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 01:12:30.517892  104058 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1026 01:12:30.518274  104058 profile.go:148] Saving config to /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/multinode-204768/config.json ...
	I1026 01:12:30.518302  104058 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/multinode-204768/config.json: {Name:mkddd639fdba346af5b0900d066666017e7c0d0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:12:30.533889  104058 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon, skipping pull
	I1026 01:12:30.533930  104058 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 exists in daemon, skipping load
	I1026 01:12:30.533955  104058 cache.go:194] Successfully downloaded all kic artifacts
	I1026 01:12:30.533990  104058 start.go:365] acquiring machines lock for multinode-204768: {Name:mk90aad761abd5aeb7eab425ea57fda1a68cb426 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 01:12:30.534084  104058 start.go:369] acquired machines lock for "multinode-204768" in 77.813µs
	I1026 01:12:30.534104  104058 start.go:93] Provisioning new machine with config: &{Name:multinode-204768 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-204768 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 01:12:30.534175  104058 start.go:125] createHost starting for "" (driver="docker")
	I1026 01:12:30.537316  104058 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1026 01:12:30.537541  104058 start.go:159] libmachine.API.Create for "multinode-204768" (driver="docker")
	I1026 01:12:30.537574  104058 client.go:168] LocalClient.Create starting
	I1026 01:12:30.537660  104058 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17491-8444/.minikube/certs/ca.pem
	I1026 01:12:30.537711  104058 main.go:141] libmachine: Decoding PEM data...
	I1026 01:12:30.537731  104058 main.go:141] libmachine: Parsing certificate...
	I1026 01:12:30.537780  104058 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17491-8444/.minikube/certs/cert.pem
	I1026 01:12:30.537798  104058 main.go:141] libmachine: Decoding PEM data...
	I1026 01:12:30.537806  104058 main.go:141] libmachine: Parsing certificate...
	I1026 01:12:30.538118  104058 cli_runner.go:164] Run: docker network inspect multinode-204768 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1026 01:12:30.554454  104058 cli_runner.go:211] docker network inspect multinode-204768 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1026 01:12:30.554511  104058 network_create.go:281] running [docker network inspect multinode-204768] to gather additional debugging logs...
	I1026 01:12:30.554539  104058 cli_runner.go:164] Run: docker network inspect multinode-204768
	W1026 01:12:30.569998  104058 cli_runner.go:211] docker network inspect multinode-204768 returned with exit code 1
	I1026 01:12:30.570033  104058 network_create.go:284] error running [docker network inspect multinode-204768]: docker network inspect multinode-204768: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-204768 not found
	I1026 01:12:30.570045  104058 network_create.go:286] output of [docker network inspect multinode-204768]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-204768 not found
	
	** /stderr **
	I1026 01:12:30.570136  104058 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 01:12:30.586088  104058 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-05a98d7b2c42 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:ef:15:c2:40} reservation:<nil>}
	I1026 01:12:30.586486  104058 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00285bf60}
	I1026 01:12:30.586539  104058 network_create.go:124] attempt to create docker network multinode-204768 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1026 01:12:30.586585  104058 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-204768 multinode-204768
	I1026 01:12:30.637767  104058 network_create.go:108] docker network multinode-204768 192.168.58.0/24 created
	I1026 01:12:30.637797  104058 kic.go:121] calculated static IP "192.168.58.2" for the "multinode-204768" container
	I1026 01:12:30.637872  104058 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1026 01:12:30.653067  104058 cli_runner.go:164] Run: docker volume create multinode-204768 --label name.minikube.sigs.k8s.io=multinode-204768 --label created_by.minikube.sigs.k8s.io=true
	I1026 01:12:30.671113  104058 oci.go:103] Successfully created a docker volume multinode-204768
	I1026 01:12:30.671206  104058 cli_runner.go:164] Run: docker run --rm --name multinode-204768-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-204768 --entrypoint /usr/bin/test -v multinode-204768:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -d /var/lib
	I1026 01:12:31.189035  104058 oci.go:107] Successfully prepared a docker volume multinode-204768
	I1026 01:12:31.189099  104058 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1026 01:12:31.189122  104058 kic.go:194] Starting extracting preloaded images to volume ...
	I1026 01:12:31.189204  104058 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17491-8444/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-204768:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -I lz4 -xf /preloaded.tar -C /extractDir
	I1026 01:12:36.340386  104058 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17491-8444/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-204768:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -I lz4 -xf /preloaded.tar -C /extractDir: (5.151089054s)
	I1026 01:12:36.340416  104058 kic.go:203] duration metric: took 5.151292 seconds to extract preloaded images to volume
	W1026 01:12:36.340545  104058 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1026 01:12:36.340640  104058 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1026 01:12:36.392729  104058 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-204768 --name multinode-204768 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-204768 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-204768 --network multinode-204768 --ip 192.168.58.2 --volume multinode-204768:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883
	I1026 01:12:36.696052  104058 cli_runner.go:164] Run: docker container inspect multinode-204768 --format={{.State.Running}}
	I1026 01:12:36.715183  104058 cli_runner.go:164] Run: docker container inspect multinode-204768 --format={{.State.Status}}
	I1026 01:12:36.733097  104058 cli_runner.go:164] Run: docker exec multinode-204768 stat /var/lib/dpkg/alternatives/iptables
	I1026 01:12:36.798651  104058 oci.go:144] the created container "multinode-204768" has a running status.
	I1026 01:12:36.798683  104058 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17491-8444/.minikube/machines/multinode-204768/id_rsa...
	I1026 01:12:36.953430  104058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17491-8444/.minikube/machines/multinode-204768/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1026 01:12:36.953482  104058 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17491-8444/.minikube/machines/multinode-204768/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1026 01:12:36.972406  104058 cli_runner.go:164] Run: docker container inspect multinode-204768 --format={{.State.Status}}
	I1026 01:12:36.988150  104058 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1026 01:12:36.988170  104058 kic_runner.go:114] Args: [docker exec --privileged multinode-204768 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1026 01:12:37.038608  104058 cli_runner.go:164] Run: docker container inspect multinode-204768 --format={{.State.Status}}
	I1026 01:12:37.055452  104058 machine.go:88] provisioning docker machine ...
	I1026 01:12:37.055500  104058 ubuntu.go:169] provisioning hostname "multinode-204768"
	I1026 01:12:37.055569  104058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-204768
	I1026 01:12:37.072540  104058 main.go:141] libmachine: Using SSH client type: native
	I1026 01:12:37.072912  104058 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 127.0.0.1 32849 <nil> <nil>}
	I1026 01:12:37.072931  104058 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-204768 && echo "multinode-204768" | sudo tee /etc/hostname
	I1026 01:12:37.073557  104058 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57686->127.0.0.1:32849: read: connection reset by peer
	I1026 01:12:40.204259  104058 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-204768
	
	I1026 01:12:40.204331  104058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-204768
	I1026 01:12:40.220592  104058 main.go:141] libmachine: Using SSH client type: native
	I1026 01:12:40.221105  104058 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 127.0.0.1 32849 <nil> <nil>}
	I1026 01:12:40.221135  104058 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-204768' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-204768/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-204768' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 01:12:40.337719  104058 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 01:12:40.337747  104058 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17491-8444/.minikube CaCertPath:/home/jenkins/minikube-integration/17491-8444/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17491-8444/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17491-8444/.minikube}
	I1026 01:12:40.337781  104058 ubuntu.go:177] setting up certificates
	I1026 01:12:40.337790  104058 provision.go:83] configureAuth start
	I1026 01:12:40.337834  104058 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-204768
	I1026 01:12:40.356348  104058 provision.go:138] copyHostCerts
	I1026 01:12:40.356394  104058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17491-8444/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17491-8444/.minikube/ca.pem
	I1026 01:12:40.356453  104058 exec_runner.go:144] found /home/jenkins/minikube-integration/17491-8444/.minikube/ca.pem, removing ...
	I1026 01:12:40.356468  104058 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17491-8444/.minikube/ca.pem
	I1026 01:12:40.356547  104058 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17491-8444/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17491-8444/.minikube/ca.pem (1078 bytes)
	I1026 01:12:40.356659  104058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17491-8444/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17491-8444/.minikube/cert.pem
	I1026 01:12:40.356701  104058 exec_runner.go:144] found /home/jenkins/minikube-integration/17491-8444/.minikube/cert.pem, removing ...
	I1026 01:12:40.356712  104058 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17491-8444/.minikube/cert.pem
	I1026 01:12:40.356761  104058 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17491-8444/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17491-8444/.minikube/cert.pem (1123 bytes)
	I1026 01:12:40.356832  104058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17491-8444/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17491-8444/.minikube/key.pem
	I1026 01:12:40.356863  104058 exec_runner.go:144] found /home/jenkins/minikube-integration/17491-8444/.minikube/key.pem, removing ...
	I1026 01:12:40.356873  104058 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17491-8444/.minikube/key.pem
	I1026 01:12:40.356913  104058 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17491-8444/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17491-8444/.minikube/key.pem (1675 bytes)
	I1026 01:12:40.356989  104058 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17491-8444/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17491-8444/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17491-8444/.minikube/certs/ca-key.pem org=jenkins.multinode-204768 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-204768]
	I1026 01:12:40.497917  104058 provision.go:172] copyRemoteCerts
	I1026 01:12:40.497987  104058 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 01:12:40.498039  104058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-204768
	I1026 01:12:40.514881  104058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32849 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/multinode-204768/id_rsa Username:docker}
	I1026 01:12:40.606044  104058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17491-8444/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1026 01:12:40.606110  104058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1026 01:12:40.627282  104058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17491-8444/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1026 01:12:40.627342  104058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1026 01:12:40.648544  104058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17491-8444/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1026 01:12:40.648614  104058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 01:12:40.669761  104058 provision.go:86] duration metric: configureAuth took 331.957671ms
	I1026 01:12:40.669787  104058 ubuntu.go:193] setting minikube options for container-runtime
	I1026 01:12:40.669958  104058 config.go:182] Loaded profile config "multinode-204768": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1026 01:12:40.670057  104058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-204768
	I1026 01:12:40.686702  104058 main.go:141] libmachine: Using SSH client type: native
	I1026 01:12:40.687035  104058 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 127.0.0.1 32849 <nil> <nil>}
	I1026 01:12:40.687056  104058 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 01:12:40.891230  104058 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 01:12:40.891257  104058 machine.go:91] provisioned docker machine in 3.835782136s
	I1026 01:12:40.891268  104058 client.go:171] LocalClient.Create took 10.353683666s
	I1026 01:12:40.891292  104058 start.go:167] duration metric: libmachine.API.Create for "multinode-204768" took 10.353750723s
	I1026 01:12:40.891302  104058 start.go:300] post-start starting for "multinode-204768" (driver="docker")
	I1026 01:12:40.891314  104058 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 01:12:40.891376  104058 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 01:12:40.891435  104058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-204768
	I1026 01:12:40.908113  104058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32849 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/multinode-204768/id_rsa Username:docker}
	I1026 01:12:40.998449  104058 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 01:12:41.001506  104058 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1026 01:12:41.001527  104058 command_runner.go:130] > NAME="Ubuntu"
	I1026 01:12:41.001534  104058 command_runner.go:130] > VERSION_ID="22.04"
	I1026 01:12:41.001540  104058 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1026 01:12:41.001545  104058 command_runner.go:130] > VERSION_CODENAME=jammy
	I1026 01:12:41.001548  104058 command_runner.go:130] > ID=ubuntu
	I1026 01:12:41.001552  104058 command_runner.go:130] > ID_LIKE=debian
	I1026 01:12:41.001557  104058 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1026 01:12:41.001562  104058 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1026 01:12:41.001567  104058 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1026 01:12:41.001574  104058 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1026 01:12:41.001582  104058 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1026 01:12:41.001627  104058 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 01:12:41.001651  104058 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1026 01:12:41.001666  104058 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1026 01:12:41.001710  104058 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1026 01:12:41.001725  104058 filesync.go:126] Scanning /home/jenkins/minikube-integration/17491-8444/.minikube/addons for local assets ...
	I1026 01:12:41.001775  104058 filesync.go:126] Scanning /home/jenkins/minikube-integration/17491-8444/.minikube/files for local assets ...
	I1026 01:12:41.001868  104058 filesync.go:149] local asset: /home/jenkins/minikube-integration/17491-8444/.minikube/files/etc/ssl/certs/152462.pem -> 152462.pem in /etc/ssl/certs
	I1026 01:12:41.001879  104058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17491-8444/.minikube/files/etc/ssl/certs/152462.pem -> /etc/ssl/certs/152462.pem
	I1026 01:12:41.001966  104058 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 01:12:41.009695  104058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/files/etc/ssl/certs/152462.pem --> /etc/ssl/certs/152462.pem (1708 bytes)
	I1026 01:12:41.031929  104058 start.go:303] post-start completed in 140.614239ms
	I1026 01:12:41.032291  104058 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-204768
	I1026 01:12:41.048584  104058 profile.go:148] Saving config to /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/multinode-204768/config.json ...
	I1026 01:12:41.048865  104058 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 01:12:41.048905  104058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-204768
	I1026 01:12:41.065303  104058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32849 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/multinode-204768/id_rsa Username:docker}
	I1026 01:12:41.150284  104058 command_runner.go:130] > 20%!
	(MISSING)I1026 01:12:41.150358  104058 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 01:12:41.154720  104058 command_runner.go:130] > 235G
	I1026 01:12:41.154757  104058 start.go:128] duration metric: createHost completed in 10.620572703s
	I1026 01:12:41.154768  104058 start.go:83] releasing machines lock for "multinode-204768", held for 10.620673226s
	I1026 01:12:41.154844  104058 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-204768
	I1026 01:12:41.171217  104058 ssh_runner.go:195] Run: cat /version.json
	I1026 01:12:41.171265  104058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-204768
	I1026 01:12:41.171301  104058 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 01:12:41.171377  104058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-204768
	I1026 01:12:41.189063  104058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32849 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/multinode-204768/id_rsa Username:docker}
	I1026 01:12:41.189417  104058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32849 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/multinode-204768/id_rsa Username:docker}
	I1026 01:12:41.362048  104058 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1026 01:12:41.364279  104058 command_runner.go:130] > {"iso_version": "v1.31.0-1697471113-17434", "kicbase_version": "v0.0.40-1698055645-17423", "minikube_version": "v1.31.2", "commit": "585245745aba695f9444ad633713942a6eacd882"}
	I1026 01:12:41.364425  104058 ssh_runner.go:195] Run: systemctl --version
	I1026 01:12:41.368493  104058 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.10)
	I1026 01:12:41.368520  104058 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1026 01:12:41.368569  104058 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 01:12:41.504580  104058 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1026 01:12:41.508418  104058 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1026 01:12:41.508436  104058 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1026 01:12:41.508442  104058 command_runner.go:130] > Device: 36h/54d	Inode: 800898      Links: 1
	I1026 01:12:41.508448  104058 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1026 01:12:41.508460  104058 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I1026 01:12:41.508465  104058 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1026 01:12:41.508470  104058 command_runner.go:130] > Change: 2023-10-26 00:53:54.215199380 +0000
	I1026 01:12:41.508481  104058 command_runner.go:130] >  Birth: 2023-10-26 00:53:54.215199380 +0000
	I1026 01:12:41.508634  104058 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 01:12:41.525628  104058 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1026 01:12:41.525730  104058 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 01:12:41.552595  104058 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1026 01:12:41.552647  104058 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1026 01:12:41.552654  104058 start.go:472] detecting cgroup driver to use...
	I1026 01:12:41.552684  104058 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1026 01:12:41.552737  104058 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 01:12:41.565999  104058 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 01:12:41.575918  104058 docker.go:198] disabling cri-docker service (if available) ...
	I1026 01:12:41.575980  104058 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 01:12:41.587387  104058 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 01:12:41.599589  104058 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 01:12:41.674122  104058 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 01:12:41.754439  104058 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1026 01:12:41.754484  104058 docker.go:214] disabling docker service ...
	I1026 01:12:41.754523  104058 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 01:12:41.771940  104058 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 01:12:41.782097  104058 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 01:12:41.859027  104058 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1026 01:12:41.859088  104058 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 01:12:41.869568  104058 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1026 01:12:41.939117  104058 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 01:12:41.949160  104058 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 01:12:41.962412  104058 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1026 01:12:41.963155  104058 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1026 01:12:41.963315  104058 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:12:41.972301  104058 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 01:12:41.972348  104058 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:12:41.981095  104058 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:12:41.989791  104058 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:12:41.998337  104058 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 01:12:42.006309  104058 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 01:12:42.013321  104058 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1026 01:12:42.013395  104058 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 01:12:42.020452  104058 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 01:12:42.089982  104058 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 01:12:42.179097  104058 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 01:12:42.179170  104058 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 01:12:42.182435  104058 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1026 01:12:42.182465  104058 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1026 01:12:42.182476  104058 command_runner.go:130] > Device: 40h/64d	Inode: 190         Links: 1
	I1026 01:12:42.182485  104058 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1026 01:12:42.182494  104058 command_runner.go:130] > Access: 2023-10-26 01:12:42.167434124 +0000
	I1026 01:12:42.182510  104058 command_runner.go:130] > Modify: 2023-10-26 01:12:42.167434124 +0000
	I1026 01:12:42.182517  104058 command_runner.go:130] > Change: 2023-10-26 01:12:42.167434124 +0000
	I1026 01:12:42.182521  104058 command_runner.go:130] >  Birth: -
	I1026 01:12:42.182539  104058 start.go:540] Will wait 60s for crictl version
	I1026 01:12:42.182574  104058 ssh_runner.go:195] Run: which crictl
	I1026 01:12:42.185567  104058 command_runner.go:130] > /usr/bin/crictl
	I1026 01:12:42.185620  104058 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1026 01:12:42.215012  104058 command_runner.go:130] > Version:  0.1.0
	I1026 01:12:42.215037  104058 command_runner.go:130] > RuntimeName:  cri-o
	I1026 01:12:42.215054  104058 command_runner.go:130] > RuntimeVersion:  1.24.6
	I1026 01:12:42.215063  104058 command_runner.go:130] > RuntimeApiVersion:  v1
	I1026 01:12:42.217102  104058 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1026 01:12:42.217182  104058 ssh_runner.go:195] Run: crio --version
	I1026 01:12:42.249706  104058 command_runner.go:130] > crio version 1.24.6
	I1026 01:12:42.249728  104058 command_runner.go:130] > Version:          1.24.6
	I1026 01:12:42.249735  104058 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1026 01:12:42.249740  104058 command_runner.go:130] > GitTreeState:     clean
	I1026 01:12:42.249746  104058 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1026 01:12:42.249750  104058 command_runner.go:130] > GoVersion:        go1.18.2
	I1026 01:12:42.249754  104058 command_runner.go:130] > Compiler:         gc
	I1026 01:12:42.249759  104058 command_runner.go:130] > Platform:         linux/amd64
	I1026 01:12:42.249764  104058 command_runner.go:130] > Linkmode:         dynamic
	I1026 01:12:42.249772  104058 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1026 01:12:42.249776  104058 command_runner.go:130] > SeccompEnabled:   true
	I1026 01:12:42.249783  104058 command_runner.go:130] > AppArmorEnabled:  false
	I1026 01:12:42.249889  104058 ssh_runner.go:195] Run: crio --version
	I1026 01:12:42.285010  104058 command_runner.go:130] > crio version 1.24.6
	I1026 01:12:42.285031  104058 command_runner.go:130] > Version:          1.24.6
	I1026 01:12:42.285038  104058 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1026 01:12:42.285042  104058 command_runner.go:130] > GitTreeState:     clean
	I1026 01:12:42.285047  104058 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1026 01:12:42.285052  104058 command_runner.go:130] > GoVersion:        go1.18.2
	I1026 01:12:42.285057  104058 command_runner.go:130] > Compiler:         gc
	I1026 01:12:42.285076  104058 command_runner.go:130] > Platform:         linux/amd64
	I1026 01:12:42.285085  104058 command_runner.go:130] > Linkmode:         dynamic
	I1026 01:12:42.285096  104058 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1026 01:12:42.285101  104058 command_runner.go:130] > SeccompEnabled:   true
	I1026 01:12:42.285105  104058 command_runner.go:130] > AppArmorEnabled:  false
	I1026 01:12:42.287346  104058 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.6 ...
	I1026 01:12:42.288896  104058 cli_runner.go:164] Run: docker network inspect multinode-204768 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 01:12:42.305504  104058 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1026 01:12:42.308911  104058 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 01:12:42.318643  104058 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1026 01:12:42.318700  104058 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 01:12:42.369743  104058 command_runner.go:130] > {
	I1026 01:12:42.369767  104058 command_runner.go:130] >   "images": [
	I1026 01:12:42.369773  104058 command_runner.go:130] >     {
	I1026 01:12:42.369787  104058 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I1026 01:12:42.369795  104058 command_runner.go:130] >       "repoTags": [
	I1026 01:12:42.369805  104058 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1026 01:12:42.369811  104058 command_runner.go:130] >       ],
	I1026 01:12:42.369820  104058 command_runner.go:130] >       "repoDigests": [
	I1026 01:12:42.369845  104058 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1026 01:12:42.369879  104058 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I1026 01:12:42.369889  104058 command_runner.go:130] >       ],
	I1026 01:12:42.369897  104058 command_runner.go:130] >       "size": "65258016",
	I1026 01:12:42.369907  104058 command_runner.go:130] >       "uid": null,
	I1026 01:12:42.369915  104058 command_runner.go:130] >       "username": "",
	I1026 01:12:42.369926  104058 command_runner.go:130] >       "spec": null,
	I1026 01:12:42.369937  104058 command_runner.go:130] >       "pinned": false
	I1026 01:12:42.369949  104058 command_runner.go:130] >     },
	I1026 01:12:42.369958  104058 command_runner.go:130] >     {
	I1026 01:12:42.369969  104058 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1026 01:12:42.369979  104058 command_runner.go:130] >       "repoTags": [
	I1026 01:12:42.369992  104058 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1026 01:12:42.369998  104058 command_runner.go:130] >       ],
	I1026 01:12:42.370006  104058 command_runner.go:130] >       "repoDigests": [
	I1026 01:12:42.370023  104058 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1026 01:12:42.370039  104058 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1026 01:12:42.370049  104058 command_runner.go:130] >       ],
	I1026 01:12:42.370063  104058 command_runner.go:130] >       "size": "31470524",
	I1026 01:12:42.370073  104058 command_runner.go:130] >       "uid": null,
	I1026 01:12:42.370080  104058 command_runner.go:130] >       "username": "",
	I1026 01:12:42.370090  104058 command_runner.go:130] >       "spec": null,
	I1026 01:12:42.370100  104058 command_runner.go:130] >       "pinned": false
	I1026 01:12:42.370110  104058 command_runner.go:130] >     },
	I1026 01:12:42.370116  104058 command_runner.go:130] >     {
	I1026 01:12:42.370130  104058 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I1026 01:12:42.370143  104058 command_runner.go:130] >       "repoTags": [
	I1026 01:12:42.370156  104058 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1026 01:12:42.370165  104058 command_runner.go:130] >       ],
	I1026 01:12:42.370172  104058 command_runner.go:130] >       "repoDigests": [
	I1026 01:12:42.370189  104058 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I1026 01:12:42.370204  104058 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I1026 01:12:42.370213  104058 command_runner.go:130] >       ],
	I1026 01:12:42.370221  104058 command_runner.go:130] >       "size": "53621675",
	I1026 01:12:42.370231  104058 command_runner.go:130] >       "uid": null,
	I1026 01:12:42.370241  104058 command_runner.go:130] >       "username": "",
	I1026 01:12:42.370251  104058 command_runner.go:130] >       "spec": null,
	I1026 01:12:42.370258  104058 command_runner.go:130] >       "pinned": false
	I1026 01:12:42.370268  104058 command_runner.go:130] >     },
	I1026 01:12:42.370280  104058 command_runner.go:130] >     {
	I1026 01:12:42.370293  104058 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I1026 01:12:42.370302  104058 command_runner.go:130] >       "repoTags": [
	I1026 01:12:42.370310  104058 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1026 01:12:42.370318  104058 command_runner.go:130] >       ],
	I1026 01:12:42.370328  104058 command_runner.go:130] >       "repoDigests": [
	I1026 01:12:42.370342  104058 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I1026 01:12:42.370356  104058 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I1026 01:12:42.370372  104058 command_runner.go:130] >       ],
	I1026 01:12:42.370382  104058 command_runner.go:130] >       "size": "295456551",
	I1026 01:12:42.370390  104058 command_runner.go:130] >       "uid": {
	I1026 01:12:42.370399  104058 command_runner.go:130] >         "value": "0"
	I1026 01:12:42.370405  104058 command_runner.go:130] >       },
	I1026 01:12:42.370414  104058 command_runner.go:130] >       "username": "",
	I1026 01:12:42.370420  104058 command_runner.go:130] >       "spec": null,
	I1026 01:12:42.370430  104058 command_runner.go:130] >       "pinned": false
	I1026 01:12:42.370438  104058 command_runner.go:130] >     },
	I1026 01:12:42.370444  104058 command_runner.go:130] >     {
	I1026 01:12:42.370456  104058 command_runner.go:130] >       "id": "53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076",
	I1026 01:12:42.370465  104058 command_runner.go:130] >       "repoTags": [
	I1026 01:12:42.370472  104058 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.3"
	I1026 01:12:42.370481  104058 command_runner.go:130] >       ],
	I1026 01:12:42.370487  104058 command_runner.go:130] >       "repoDigests": [
	I1026 01:12:42.370511  104058 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab",
	I1026 01:12:42.370525  104058 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d"
	I1026 01:12:42.370534  104058 command_runner.go:130] >       ],
	I1026 01:12:42.370541  104058 command_runner.go:130] >       "size": "127165392",
	I1026 01:12:42.370550  104058 command_runner.go:130] >       "uid": {
	I1026 01:12:42.370556  104058 command_runner.go:130] >         "value": "0"
	I1026 01:12:42.370565  104058 command_runner.go:130] >       },
	I1026 01:12:42.370572  104058 command_runner.go:130] >       "username": "",
	I1026 01:12:42.370581  104058 command_runner.go:130] >       "spec": null,
	I1026 01:12:42.370588  104058 command_runner.go:130] >       "pinned": false
	I1026 01:12:42.370596  104058 command_runner.go:130] >     },
	I1026 01:12:42.370602  104058 command_runner.go:130] >     {
	I1026 01:12:42.370614  104058 command_runner.go:130] >       "id": "10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3",
	I1026 01:12:42.370624  104058 command_runner.go:130] >       "repoTags": [
	I1026 01:12:42.370632  104058 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.3"
	I1026 01:12:42.370640  104058 command_runner.go:130] >       ],
	I1026 01:12:42.370647  104058 command_runner.go:130] >       "repoDigests": [
	I1026 01:12:42.370663  104058 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707",
	I1026 01:12:42.370681  104058 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:dd4817791cfaa85482f27af472e4b100e362134530a7c4bae50f3ce10729d75d"
	I1026 01:12:42.370691  104058 command_runner.go:130] >       ],
	I1026 01:12:42.370702  104058 command_runner.go:130] >       "size": "123188534",
	I1026 01:12:42.370711  104058 command_runner.go:130] >       "uid": {
	I1026 01:12:42.370720  104058 command_runner.go:130] >         "value": "0"
	I1026 01:12:42.370728  104058 command_runner.go:130] >       },
	I1026 01:12:42.370734  104058 command_runner.go:130] >       "username": "",
	I1026 01:12:42.370744  104058 command_runner.go:130] >       "spec": null,
	I1026 01:12:42.370751  104058 command_runner.go:130] >       "pinned": false
	I1026 01:12:42.370759  104058 command_runner.go:130] >     },
	I1026 01:12:42.370764  104058 command_runner.go:130] >     {
	I1026 01:12:42.370773  104058 command_runner.go:130] >       "id": "bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf",
	I1026 01:12:42.370783  104058 command_runner.go:130] >       "repoTags": [
	I1026 01:12:42.370791  104058 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.3"
	I1026 01:12:42.370800  104058 command_runner.go:130] >       ],
	I1026 01:12:42.370808  104058 command_runner.go:130] >       "repoDigests": [
	I1026 01:12:42.370822  104058 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8",
	I1026 01:12:42.370836  104058 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072"
	I1026 01:12:42.370847  104058 command_runner.go:130] >       ],
	I1026 01:12:42.370857  104058 command_runner.go:130] >       "size": "74691991",
	I1026 01:12:42.370871  104058 command_runner.go:130] >       "uid": null,
	I1026 01:12:42.370882  104058 command_runner.go:130] >       "username": "",
	I1026 01:12:42.370888  104058 command_runner.go:130] >       "spec": null,
	I1026 01:12:42.370897  104058 command_runner.go:130] >       "pinned": false
	I1026 01:12:42.370903  104058 command_runner.go:130] >     },
	I1026 01:12:42.370912  104058 command_runner.go:130] >     {
	I1026 01:12:42.370923  104058 command_runner.go:130] >       "id": "6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4",
	I1026 01:12:42.370933  104058 command_runner.go:130] >       "repoTags": [
	I1026 01:12:42.370943  104058 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.3"
	I1026 01:12:42.370952  104058 command_runner.go:130] >       ],
	I1026 01:12:42.370963  104058 command_runner.go:130] >       "repoDigests": [
	I1026 01:12:42.371029  104058 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725",
	I1026 01:12:42.371047  104058 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:fbe8838032fa8f01b36282417596119a481e5bc11eca89270073122f0cc90374"
	I1026 01:12:42.371053  104058 command_runner.go:130] >       ],
	I1026 01:12:42.371059  104058 command_runner.go:130] >       "size": "61498678",
	I1026 01:12:42.371065  104058 command_runner.go:130] >       "uid": {
	I1026 01:12:42.371076  104058 command_runner.go:130] >         "value": "0"
	I1026 01:12:42.371084  104058 command_runner.go:130] >       },
	I1026 01:12:42.371090  104058 command_runner.go:130] >       "username": "",
	I1026 01:12:42.371099  104058 command_runner.go:130] >       "spec": null,
	I1026 01:12:42.371106  104058 command_runner.go:130] >       "pinned": false
	I1026 01:12:42.371115  104058 command_runner.go:130] >     },
	I1026 01:12:42.371121  104058 command_runner.go:130] >     {
	I1026 01:12:42.371137  104058 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1026 01:12:42.371148  104058 command_runner.go:130] >       "repoTags": [
	I1026 01:12:42.371156  104058 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1026 01:12:42.371164  104058 command_runner.go:130] >       ],
	I1026 01:12:42.371171  104058 command_runner.go:130] >       "repoDigests": [
	I1026 01:12:42.371184  104058 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1026 01:12:42.371198  104058 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1026 01:12:42.371206  104058 command_runner.go:130] >       ],
	I1026 01:12:42.371215  104058 command_runner.go:130] >       "size": "750414",
	I1026 01:12:42.371221  104058 command_runner.go:130] >       "uid": {
	I1026 01:12:42.371230  104058 command_runner.go:130] >         "value": "65535"
	I1026 01:12:42.371237  104058 command_runner.go:130] >       },
	I1026 01:12:42.371246  104058 command_runner.go:130] >       "username": "",
	I1026 01:12:42.371252  104058 command_runner.go:130] >       "spec": null,
	I1026 01:12:42.371262  104058 command_runner.go:130] >       "pinned": false
	I1026 01:12:42.371268  104058 command_runner.go:130] >     }
	I1026 01:12:42.371277  104058 command_runner.go:130] >   ]
	I1026 01:12:42.371283  104058 command_runner.go:130] > }
	I1026 01:12:42.372724  104058 crio.go:496] all images are preloaded for cri-o runtime.
	I1026 01:12:42.372749  104058 crio.go:415] Images already preloaded, skipping extraction
	I1026 01:12:42.372798  104058 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 01:12:42.402520  104058 command_runner.go:130] > {
	I1026 01:12:42.402542  104058 command_runner.go:130] >   "images": [
	I1026 01:12:42.402549  104058 command_runner.go:130] >     {
	I1026 01:12:42.402561  104058 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I1026 01:12:42.402569  104058 command_runner.go:130] >       "repoTags": [
	I1026 01:12:42.402594  104058 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1026 01:12:42.402609  104058 command_runner.go:130] >       ],
	I1026 01:12:42.402617  104058 command_runner.go:130] >       "repoDigests": [
	I1026 01:12:42.402631  104058 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1026 01:12:42.402647  104058 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I1026 01:12:42.402656  104058 command_runner.go:130] >       ],
	I1026 01:12:42.402667  104058 command_runner.go:130] >       "size": "65258016",
	I1026 01:12:42.402674  104058 command_runner.go:130] >       "uid": null,
	I1026 01:12:42.402685  104058 command_runner.go:130] >       "username": "",
	I1026 01:12:42.402708  104058 command_runner.go:130] >       "spec": null,
	I1026 01:12:42.402718  104058 command_runner.go:130] >       "pinned": false
	I1026 01:12:42.402727  104058 command_runner.go:130] >     },
	I1026 01:12:42.402737  104058 command_runner.go:130] >     {
	I1026 01:12:42.402748  104058 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1026 01:12:42.402755  104058 command_runner.go:130] >       "repoTags": [
	I1026 01:12:42.402764  104058 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1026 01:12:42.402770  104058 command_runner.go:130] >       ],
	I1026 01:12:42.402778  104058 command_runner.go:130] >       "repoDigests": [
	I1026 01:12:42.402791  104058 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1026 01:12:42.402805  104058 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1026 01:12:42.402811  104058 command_runner.go:130] >       ],
	I1026 01:12:42.402842  104058 command_runner.go:130] >       "size": "31470524",
	I1026 01:12:42.402851  104058 command_runner.go:130] >       "uid": null,
	I1026 01:12:42.402862  104058 command_runner.go:130] >       "username": "",
	I1026 01:12:42.402875  104058 command_runner.go:130] >       "spec": null,
	I1026 01:12:42.402884  104058 command_runner.go:130] >       "pinned": false
	I1026 01:12:42.402890  104058 command_runner.go:130] >     },
	I1026 01:12:42.402898  104058 command_runner.go:130] >     {
	I1026 01:12:42.402911  104058 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I1026 01:12:42.402920  104058 command_runner.go:130] >       "repoTags": [
	I1026 01:12:42.402952  104058 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1026 01:12:42.402957  104058 command_runner.go:130] >       ],
	I1026 01:12:42.402963  104058 command_runner.go:130] >       "repoDigests": [
	I1026 01:12:42.402977  104058 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I1026 01:12:42.402991  104058 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I1026 01:12:42.402999  104058 command_runner.go:130] >       ],
	I1026 01:12:42.403009  104058 command_runner.go:130] >       "size": "53621675",
	I1026 01:12:42.403017  104058 command_runner.go:130] >       "uid": null,
	I1026 01:12:42.403027  104058 command_runner.go:130] >       "username": "",
	I1026 01:12:42.403036  104058 command_runner.go:130] >       "spec": null,
	I1026 01:12:42.403045  104058 command_runner.go:130] >       "pinned": false
	I1026 01:12:42.403054  104058 command_runner.go:130] >     },
	I1026 01:12:42.403059  104058 command_runner.go:130] >     {
	I1026 01:12:42.403071  104058 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I1026 01:12:42.403081  104058 command_runner.go:130] >       "repoTags": [
	I1026 01:12:42.403096  104058 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1026 01:12:42.403104  104058 command_runner.go:130] >       ],
	I1026 01:12:42.403114  104058 command_runner.go:130] >       "repoDigests": [
	I1026 01:12:42.403127  104058 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I1026 01:12:42.403142  104058 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I1026 01:12:42.403158  104058 command_runner.go:130] >       ],
	I1026 01:12:42.403168  104058 command_runner.go:130] >       "size": "295456551",
	I1026 01:12:42.403177  104058 command_runner.go:130] >       "uid": {
	I1026 01:12:42.403186  104058 command_runner.go:130] >         "value": "0"
	I1026 01:12:42.403195  104058 command_runner.go:130] >       },
	I1026 01:12:42.403205  104058 command_runner.go:130] >       "username": "",
	I1026 01:12:42.403213  104058 command_runner.go:130] >       "spec": null,
	I1026 01:12:42.403222  104058 command_runner.go:130] >       "pinned": false
	I1026 01:12:42.403231  104058 command_runner.go:130] >     },
	I1026 01:12:42.403240  104058 command_runner.go:130] >     {
	I1026 01:12:42.403252  104058 command_runner.go:130] >       "id": "53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076",
	I1026 01:12:42.403263  104058 command_runner.go:130] >       "repoTags": [
	I1026 01:12:42.403275  104058 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.3"
	I1026 01:12:42.403289  104058 command_runner.go:130] >       ],
	I1026 01:12:42.403298  104058 command_runner.go:130] >       "repoDigests": [
	I1026 01:12:42.403309  104058 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab",
	I1026 01:12:42.403323  104058 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d"
	I1026 01:12:42.403331  104058 command_runner.go:130] >       ],
	I1026 01:12:42.403337  104058 command_runner.go:130] >       "size": "127165392",
	I1026 01:12:42.403346  104058 command_runner.go:130] >       "uid": {
	I1026 01:12:42.403353  104058 command_runner.go:130] >         "value": "0"
	I1026 01:12:42.403361  104058 command_runner.go:130] >       },
	I1026 01:12:42.403371  104058 command_runner.go:130] >       "username": "",
	I1026 01:12:42.403380  104058 command_runner.go:130] >       "spec": null,
	I1026 01:12:42.403387  104058 command_runner.go:130] >       "pinned": false
	I1026 01:12:42.403395  104058 command_runner.go:130] >     },
	I1026 01:12:42.403404  104058 command_runner.go:130] >     {
	I1026 01:12:42.403416  104058 command_runner.go:130] >       "id": "10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3",
	I1026 01:12:42.403427  104058 command_runner.go:130] >       "repoTags": [
	I1026 01:12:42.403439  104058 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.3"
	I1026 01:12:42.403447  104058 command_runner.go:130] >       ],
	I1026 01:12:42.403461  104058 command_runner.go:130] >       "repoDigests": [
	I1026 01:12:42.403478  104058 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707",
	I1026 01:12:42.403495  104058 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:dd4817791cfaa85482f27af472e4b100e362134530a7c4bae50f3ce10729d75d"
	I1026 01:12:42.403504  104058 command_runner.go:130] >       ],
	I1026 01:12:42.403514  104058 command_runner.go:130] >       "size": "123188534",
	I1026 01:12:42.403524  104058 command_runner.go:130] >       "uid": {
	I1026 01:12:42.403533  104058 command_runner.go:130] >         "value": "0"
	I1026 01:12:42.403540  104058 command_runner.go:130] >       },
	I1026 01:12:42.403550  104058 command_runner.go:130] >       "username": "",
	I1026 01:12:42.403560  104058 command_runner.go:130] >       "spec": null,
	I1026 01:12:42.403567  104058 command_runner.go:130] >       "pinned": false
	I1026 01:12:42.403577  104058 command_runner.go:130] >     },
	I1026 01:12:42.403586  104058 command_runner.go:130] >     {
	I1026 01:12:42.403600  104058 command_runner.go:130] >       "id": "bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf",
	I1026 01:12:42.403610  104058 command_runner.go:130] >       "repoTags": [
	I1026 01:12:42.403622  104058 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.3"
	I1026 01:12:42.403631  104058 command_runner.go:130] >       ],
	I1026 01:12:42.403642  104058 command_runner.go:130] >       "repoDigests": [
	I1026 01:12:42.403657  104058 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8",
	I1026 01:12:42.403672  104058 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072"
	I1026 01:12:42.403682  104058 command_runner.go:130] >       ],
	I1026 01:12:42.403692  104058 command_runner.go:130] >       "size": "74691991",
	I1026 01:12:42.403700  104058 command_runner.go:130] >       "uid": null,
	I1026 01:12:42.403709  104058 command_runner.go:130] >       "username": "",
	I1026 01:12:42.403718  104058 command_runner.go:130] >       "spec": null,
	I1026 01:12:42.403727  104058 command_runner.go:130] >       "pinned": false
	I1026 01:12:42.403733  104058 command_runner.go:130] >     },
	I1026 01:12:42.403741  104058 command_runner.go:130] >     {
	I1026 01:12:42.403756  104058 command_runner.go:130] >       "id": "6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4",
	I1026 01:12:42.403766  104058 command_runner.go:130] >       "repoTags": [
	I1026 01:12:42.403777  104058 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.3"
	I1026 01:12:42.403786  104058 command_runner.go:130] >       ],
	I1026 01:12:42.403793  104058 command_runner.go:130] >       "repoDigests": [
	I1026 01:12:42.403869  104058 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725",
	I1026 01:12:42.403888  104058 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:fbe8838032fa8f01b36282417596119a481e5bc11eca89270073122f0cc90374"
	I1026 01:12:42.403895  104058 command_runner.go:130] >       ],
	I1026 01:12:42.403906  104058 command_runner.go:130] >       "size": "61498678",
	I1026 01:12:42.403912  104058 command_runner.go:130] >       "uid": {
	I1026 01:12:42.403921  104058 command_runner.go:130] >         "value": "0"
	I1026 01:12:42.403936  104058 command_runner.go:130] >       },
	I1026 01:12:42.403941  104058 command_runner.go:130] >       "username": "",
	I1026 01:12:42.403947  104058 command_runner.go:130] >       "spec": null,
	I1026 01:12:42.403952  104058 command_runner.go:130] >       "pinned": false
	I1026 01:12:42.403960  104058 command_runner.go:130] >     },
	I1026 01:12:42.403969  104058 command_runner.go:130] >     {
	I1026 01:12:42.403980  104058 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1026 01:12:42.403990  104058 command_runner.go:130] >       "repoTags": [
	I1026 01:12:42.404001  104058 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1026 01:12:42.404005  104058 command_runner.go:130] >       ],
	I1026 01:12:42.404011  104058 command_runner.go:130] >       "repoDigests": [
	I1026 01:12:42.404018  104058 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1026 01:12:42.404028  104058 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1026 01:12:42.404033  104058 command_runner.go:130] >       ],
	I1026 01:12:42.404038  104058 command_runner.go:130] >       "size": "750414",
	I1026 01:12:42.404047  104058 command_runner.go:130] >       "uid": {
	I1026 01:12:42.404053  104058 command_runner.go:130] >         "value": "65535"
	I1026 01:12:42.404057  104058 command_runner.go:130] >       },
	I1026 01:12:42.404064  104058 command_runner.go:130] >       "username": "",
	I1026 01:12:42.404068  104058 command_runner.go:130] >       "spec": null,
	I1026 01:12:42.404075  104058 command_runner.go:130] >       "pinned": false
	I1026 01:12:42.404078  104058 command_runner.go:130] >     }
	I1026 01:12:42.404084  104058 command_runner.go:130] >   ]
	I1026 01:12:42.404087  104058 command_runner.go:130] > }
	I1026 01:12:42.404979  104058 crio.go:496] all images are preloaded for cri-o runtime.
	I1026 01:12:42.404995  104058 cache_images.go:84] Images are preloaded, skipping loading
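The preload decision above comes from parsing the `sudo crictl images --output json` output shown in the log and checking that the expected tags are present. The following is a minimal standalone sketch of that check — not minikube's actual `crio.go` implementation — assuming only the JSON shape visible above (`images[].repoTags`):

```python
import json

# A reduced sample of the `crictl images --output json` structure from the log above.
crictl_output = json.dumps({
    "images": [
        {"repoTags": ["registry.k8s.io/kube-proxy:v1.28.3"], "pinned": False},
        {"repoTags": ["registry.k8s.io/pause:3.9"], "pinned": False},
    ]
})

def images_preloaded(output: str, required: list[str]) -> bool:
    """Return True if every required tag appears in the crictl image list."""
    tags = {tag
            for img in json.loads(output)["images"]
            for tag in img.get("repoTags", [])}
    return all(r in tags for r in required)

print(images_preloaded(crictl_output, ["registry.k8s.io/pause:3.9"]))        # True
print(images_preloaded(crictl_output, ["registry.k8s.io/etcd:3.5.9-0"]))     # False
```

When every required tag is found, extraction of the preload tarball is skipped, which is exactly the "Images already preloaded, skipping extraction" path taken in this run.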
	I1026 01:12:42.405076  104058 ssh_runner.go:195] Run: crio config
	I1026 01:12:42.440179  104058 command_runner.go:130] ! time="2023-10-26 01:12:42.439757945Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I1026 01:12:42.440218  104058 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1026 01:12:42.444270  104058 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1026 01:12:42.444294  104058 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1026 01:12:42.444302  104058 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1026 01:12:42.444308  104058 command_runner.go:130] > #
	I1026 01:12:42.444318  104058 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1026 01:12:42.444328  104058 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1026 01:12:42.444343  104058 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1026 01:12:42.444358  104058 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1026 01:12:42.444367  104058 command_runner.go:130] > # reload'.
	I1026 01:12:42.444377  104058 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1026 01:12:42.444385  104058 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1026 01:12:42.444396  104058 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1026 01:12:42.444404  104058 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1026 01:12:42.444410  104058 command_runner.go:130] > [crio]
	I1026 01:12:42.444416  104058 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1026 01:12:42.444426  104058 command_runner.go:130] > # containers images, in this directory.
	I1026 01:12:42.444443  104058 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1026 01:12:42.444458  104058 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1026 01:12:42.444471  104058 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I1026 01:12:42.444482  104058 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1026 01:12:42.444491  104058 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1026 01:12:42.444498  104058 command_runner.go:130] > # storage_driver = "vfs"
	I1026 01:12:42.444504  104058 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1026 01:12:42.444512  104058 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1026 01:12:42.444522  104058 command_runner.go:130] > # storage_option = [
	I1026 01:12:42.444528  104058 command_runner.go:130] > # ]
	I1026 01:12:42.444538  104058 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1026 01:12:42.444556  104058 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1026 01:12:42.444568  104058 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1026 01:12:42.444582  104058 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1026 01:12:42.444591  104058 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1026 01:12:42.444598  104058 command_runner.go:130] > # always happen on a node reboot
	I1026 01:12:42.444603  104058 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1026 01:12:42.444611  104058 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1026 01:12:42.444621  104058 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1026 01:12:42.444641  104058 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1026 01:12:42.444653  104058 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1026 01:12:42.444664  104058 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1026 01:12:42.444677  104058 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1026 01:12:42.444688  104058 command_runner.go:130] > # internal_wipe = true
	I1026 01:12:42.444700  104058 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1026 01:12:42.444709  104058 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1026 01:12:42.444717  104058 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1026 01:12:42.444733  104058 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1026 01:12:42.444748  104058 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1026 01:12:42.444753  104058 command_runner.go:130] > [crio.api]
	I1026 01:12:42.444761  104058 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1026 01:12:42.444774  104058 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1026 01:12:42.444784  104058 command_runner.go:130] > # IP address on which the stream server will listen.
	I1026 01:12:42.444791  104058 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1026 01:12:42.444802  104058 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1026 01:12:42.444815  104058 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1026 01:12:42.444824  104058 command_runner.go:130] > # stream_port = "0"
	I1026 01:12:42.444832  104058 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1026 01:12:42.444841  104058 command_runner.go:130] > # stream_enable_tls = false
	I1026 01:12:42.444851  104058 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1026 01:12:42.444860  104058 command_runner.go:130] > # stream_idle_timeout = ""
	I1026 01:12:42.444870  104058 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1026 01:12:42.444882  104058 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1026 01:12:42.444888  104058 command_runner.go:130] > # minutes.
	I1026 01:12:42.444894  104058 command_runner.go:130] > # stream_tls_cert = ""
	I1026 01:12:42.444908  104058 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1026 01:12:42.444921  104058 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1026 01:12:42.444931  104058 command_runner.go:130] > # stream_tls_key = ""
	I1026 01:12:42.444940  104058 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1026 01:12:42.444957  104058 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1026 01:12:42.444968  104058 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1026 01:12:42.444975  104058 command_runner.go:130] > # stream_tls_ca = ""
	I1026 01:12:42.444987  104058 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1026 01:12:42.444995  104058 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1026 01:12:42.445006  104058 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1026 01:12:42.445017  104058 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1026 01:12:42.445053  104058 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1026 01:12:42.445065  104058 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1026 01:12:42.445072  104058 command_runner.go:130] > [crio.runtime]
	I1026 01:12:42.445081  104058 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1026 01:12:42.445092  104058 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1026 01:12:42.445098  104058 command_runner.go:130] > # "nofile=1024:2048"
	I1026 01:12:42.445111  104058 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1026 01:12:42.445120  104058 command_runner.go:130] > # default_ulimits = [
	I1026 01:12:42.445129  104058 command_runner.go:130] > # ]
	I1026 01:12:42.445142  104058 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1026 01:12:42.445151  104058 command_runner.go:130] > # no_pivot = false
	I1026 01:12:42.445163  104058 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1026 01:12:42.445179  104058 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1026 01:12:42.445190  104058 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1026 01:12:42.445199  104058 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1026 01:12:42.445210  104058 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1026 01:12:42.445223  104058 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1026 01:12:42.445236  104058 command_runner.go:130] > # conmon = ""
	I1026 01:12:42.445246  104058 command_runner.go:130] > # Cgroup setting for conmon
	I1026 01:12:42.445260  104058 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1026 01:12:42.445270  104058 command_runner.go:130] > conmon_cgroup = "pod"
	I1026 01:12:42.445279  104058 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1026 01:12:42.445287  104058 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1026 01:12:42.445300  104058 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1026 01:12:42.445309  104058 command_runner.go:130] > # conmon_env = [
	I1026 01:12:42.445318  104058 command_runner.go:130] > # ]
	I1026 01:12:42.445328  104058 command_runner.go:130] > # Additional environment variables to set for all the
	I1026 01:12:42.445339  104058 command_runner.go:130] > # containers. These are overridden if set in the
	I1026 01:12:42.445350  104058 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1026 01:12:42.445363  104058 command_runner.go:130] > # default_env = [
	I1026 01:12:42.445371  104058 command_runner.go:130] > # ]
	I1026 01:12:42.445380  104058 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1026 01:12:42.445386  104058 command_runner.go:130] > # selinux = false
	I1026 01:12:42.445398  104058 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1026 01:12:42.445408  104058 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1026 01:12:42.445419  104058 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1026 01:12:42.445426  104058 command_runner.go:130] > # seccomp_profile = ""
	I1026 01:12:42.445437  104058 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1026 01:12:42.445446  104058 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1026 01:12:42.445458  104058 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1026 01:12:42.445468  104058 command_runner.go:130] > # which might increase security.
	I1026 01:12:42.445475  104058 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I1026 01:12:42.445487  104058 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1026 01:12:42.445496  104058 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1026 01:12:42.445508  104058 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1026 01:12:42.445527  104058 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1026 01:12:42.445537  104058 command_runner.go:130] > # This option supports live configuration reload.
	I1026 01:12:42.445550  104058 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1026 01:12:42.445561  104058 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1026 01:12:42.445570  104058 command_runner.go:130] > # the cgroup blockio controller.
	I1026 01:12:42.445576  104058 command_runner.go:130] > # blockio_config_file = ""
	I1026 01:12:42.445586  104058 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1026 01:12:42.445594  104058 command_runner.go:130] > # irqbalance daemon.
	I1026 01:12:42.445602  104058 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1026 01:12:42.445614  104058 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1026 01:12:42.445624  104058 command_runner.go:130] > # This option supports live configuration reload.
	I1026 01:12:42.445631  104058 command_runner.go:130] > # rdt_config_file = ""
	I1026 01:12:42.445641  104058 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1026 01:12:42.445648  104058 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1026 01:12:42.445657  104058 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1026 01:12:42.445666  104058 command_runner.go:130] > # separate_pull_cgroup = ""
	I1026 01:12:42.445692  104058 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1026 01:12:42.445707  104058 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1026 01:12:42.445713  104058 command_runner.go:130] > # will be added.
	I1026 01:12:42.445723  104058 command_runner.go:130] > # default_capabilities = [
	I1026 01:12:42.445740  104058 command_runner.go:130] > # 	"CHOWN",
	I1026 01:12:42.445749  104058 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1026 01:12:42.445754  104058 command_runner.go:130] > # 	"FSETID",
	I1026 01:12:42.445762  104058 command_runner.go:130] > # 	"FOWNER",
	I1026 01:12:42.445768  104058 command_runner.go:130] > # 	"SETGID",
	I1026 01:12:42.445776  104058 command_runner.go:130] > # 	"SETUID",
	I1026 01:12:42.445782  104058 command_runner.go:130] > # 	"SETPCAP",
	I1026 01:12:42.445791  104058 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1026 01:12:42.445797  104058 command_runner.go:130] > # 	"KILL",
	I1026 01:12:42.445802  104058 command_runner.go:130] > # ]
	I1026 01:12:42.445813  104058 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1026 01:12:42.445827  104058 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1026 01:12:42.445839  104058 command_runner.go:130] > # add_inheritable_capabilities = true
	I1026 01:12:42.445850  104058 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1026 01:12:42.445863  104058 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1026 01:12:42.445872  104058 command_runner.go:130] > # default_sysctls = [
	I1026 01:12:42.445877  104058 command_runner.go:130] > # ]
	I1026 01:12:42.445889  104058 command_runner.go:130] > # List of devices on the host that a
	I1026 01:12:42.445897  104058 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1026 01:12:42.445904  104058 command_runner.go:130] > # allowed_devices = [
	I1026 01:12:42.445911  104058 command_runner.go:130] > # 	"/dev/fuse",
	I1026 01:12:42.445916  104058 command_runner.go:130] > # ]
	I1026 01:12:42.445930  104058 command_runner.go:130] > # List of additional devices, specified as
	I1026 01:12:42.445986  104058 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1026 01:12:42.446003  104058 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1026 01:12:42.446011  104058 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1026 01:12:42.446021  104058 command_runner.go:130] > # additional_devices = [
	I1026 01:12:42.446028  104058 command_runner.go:130] > # ]
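The two device lists echoed above work together: `allowed_devices` whitelists host paths a pod may request through the `io.kubernetes.cri-o.Devices` annotation (when the runtime handler permits that annotation), while `additional_devices` is injected into every container unconditionally. A sketch with illustrative values (not from this run):

```toml
# Illustrative only: whitelist /dev/fuse for the pod annotation, and
# always inject /dev/net/tun read-write into every container.
allowed_devices = [
	"/dev/fuse",
]
additional_devices = [
	"/dev/net/tun:/dev/net/tun:rwm",
]
```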
	I1026 01:12:42.446036  104058 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1026 01:12:42.446045  104058 command_runner.go:130] > # cdi_spec_dirs = [
	I1026 01:12:42.446051  104058 command_runner.go:130] > # 	"/etc/cdi",
	I1026 01:12:42.446062  104058 command_runner.go:130] > # 	"/var/run/cdi",
	I1026 01:12:42.446068  104058 command_runner.go:130] > # ]
	I1026 01:12:42.446079  104058 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1026 01:12:42.446093  104058 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1026 01:12:42.446102  104058 command_runner.go:130] > # Defaults to false.
	I1026 01:12:42.446118  104058 command_runner.go:130] > # device_ownership_from_security_context = false
	I1026 01:12:42.446132  104058 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1026 01:12:42.446144  104058 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1026 01:12:42.446150  104058 command_runner.go:130] > # hooks_dir = [
	I1026 01:12:42.446160  104058 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1026 01:12:42.446169  104058 command_runner.go:130] > # ]
	I1026 01:12:42.446178  104058 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1026 01:12:42.446191  104058 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1026 01:12:42.446202  104058 command_runner.go:130] > # its default mounts from the following two files:
	I1026 01:12:42.446210  104058 command_runner.go:130] > #
	I1026 01:12:42.446223  104058 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1026 01:12:42.446236  104058 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1026 01:12:42.446245  104058 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1026 01:12:42.446253  104058 command_runner.go:130] > #
	I1026 01:12:42.446262  104058 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1026 01:12:42.446275  104058 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1026 01:12:42.446290  104058 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1026 01:12:42.446308  104058 command_runner.go:130] > #      only add mounts it finds in this file.
	I1026 01:12:42.446319  104058 command_runner.go:130] > #
	I1026 01:12:42.446326  104058 command_runner.go:130] > # default_mounts_file = ""
	I1026 01:12:42.446343  104058 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1026 01:12:42.446353  104058 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1026 01:12:42.446361  104058 command_runner.go:130] > # pids_limit = 0
	I1026 01:12:42.446369  104058 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1026 01:12:42.446379  104058 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1026 01:12:42.446389  104058 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1026 01:12:42.446404  104058 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1026 01:12:42.446413  104058 command_runner.go:130] > # log_size_max = -1
	I1026 01:12:42.446424  104058 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1026 01:12:42.446435  104058 command_runner.go:130] > # log_to_journald = false
	I1026 01:12:42.446443  104058 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1026 01:12:42.446451  104058 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1026 01:12:42.446461  104058 command_runner.go:130] > # Path to directory for container attach sockets.
	I1026 01:12:42.446473  104058 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1026 01:12:42.446487  104058 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1026 01:12:42.446498  104058 command_runner.go:130] > # bind_mount_prefix = ""
	I1026 01:12:42.446528  104058 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1026 01:12:42.446539  104058 command_runner.go:130] > # read_only = false
	I1026 01:12:42.446552  104058 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1026 01:12:42.446566  104058 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1026 01:12:42.446577  104058 command_runner.go:130] > # live configuration reload.
	I1026 01:12:42.446588  104058 command_runner.go:130] > # log_level = "info"
	I1026 01:12:42.446602  104058 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1026 01:12:42.446614  104058 command_runner.go:130] > # This option supports live configuration reload.
	I1026 01:12:42.446629  104058 command_runner.go:130] > # log_filter = ""
	I1026 01:12:42.446642  104058 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1026 01:12:42.446656  104058 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1026 01:12:42.446667  104058 command_runner.go:130] > # separated by comma.
	I1026 01:12:42.446678  104058 command_runner.go:130] > # uid_mappings = ""
	I1026 01:12:42.446693  104058 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1026 01:12:42.446707  104058 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1026 01:12:42.446717  104058 command_runner.go:130] > # separated by comma.
	I1026 01:12:42.446725  104058 command_runner.go:130] > # gid_mappings = ""
	I1026 01:12:42.446739  104058 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1026 01:12:42.446756  104058 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1026 01:12:42.446770  104058 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1026 01:12:42.446782  104058 command_runner.go:130] > # minimum_mappable_uid = -1
	I1026 01:12:42.446797  104058 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1026 01:12:42.446810  104058 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1026 01:12:42.446820  104058 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1026 01:12:42.446831  104058 command_runner.go:130] > # minimum_mappable_gid = -1
	I1026 01:12:42.446845  104058 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1026 01:12:42.446859  104058 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1026 01:12:42.446875  104058 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1026 01:12:42.446887  104058 command_runner.go:130] > # ctr_stop_timeout = 30
	I1026 01:12:42.446898  104058 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1026 01:12:42.446918  104058 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1026 01:12:42.446931  104058 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1026 01:12:42.446943  104058 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1026 01:12:42.446954  104058 command_runner.go:130] > # drop_infra_ctr = true
	I1026 01:12:42.446970  104058 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1026 01:12:42.446983  104058 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1026 01:12:42.447001  104058 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1026 01:12:42.447012  104058 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1026 01:12:42.447026  104058 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1026 01:12:42.447038  104058 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1026 01:12:42.447050  104058 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1026 01:12:42.447065  104058 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1026 01:12:42.447075  104058 command_runner.go:130] > # pinns_path = ""
	I1026 01:12:42.447087  104058 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1026 01:12:42.447101  104058 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1026 01:12:42.447116  104058 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1026 01:12:42.447127  104058 command_runner.go:130] > # default_runtime = "runc"
	I1026 01:12:42.447140  104058 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1026 01:12:42.447156  104058 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1026 01:12:42.447175  104058 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1026 01:12:42.447187  104058 command_runner.go:130] > # creation as a file is not desired either.
	I1026 01:12:42.447205  104058 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1026 01:12:42.447218  104058 command_runner.go:130] > # the hostname is being managed dynamically.
	I1026 01:12:42.447229  104058 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1026 01:12:42.447242  104058 command_runner.go:130] > # ]
	I1026 01:12:42.447257  104058 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1026 01:12:42.447272  104058 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1026 01:12:42.447286  104058 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1026 01:12:42.447301  104058 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1026 01:12:42.447310  104058 command_runner.go:130] > #
	I1026 01:12:42.447319  104058 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1026 01:12:42.447331  104058 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1026 01:12:42.447341  104058 command_runner.go:130] > #  runtime_type = "oci"
	I1026 01:12:42.447353  104058 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1026 01:12:42.447366  104058 command_runner.go:130] > #  privileged_without_host_devices = false
	I1026 01:12:42.447377  104058 command_runner.go:130] > #  allowed_annotations = []
	I1026 01:12:42.447388  104058 command_runner.go:130] > # Where:
	I1026 01:12:42.447401  104058 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1026 01:12:42.447415  104058 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1026 01:12:42.447429  104058 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1026 01:12:42.447443  104058 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1026 01:12:42.447452  104058 command_runner.go:130] > #   in $PATH.
	I1026 01:12:42.447473  104058 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1026 01:12:42.447486  104058 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1026 01:12:42.447500  104058 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1026 01:12:42.447510  104058 command_runner.go:130] > #   state.
	I1026 01:12:42.447530  104058 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1026 01:12:42.447543  104058 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1026 01:12:42.447557  104058 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1026 01:12:42.447570  104058 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1026 01:12:42.447584  104058 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1026 01:12:42.447599  104058 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1026 01:12:42.447611  104058 command_runner.go:130] > #   The currently recognized values are:
	I1026 01:12:42.447626  104058 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1026 01:12:42.447641  104058 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1026 01:12:42.447656  104058 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1026 01:12:42.447670  104058 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1026 01:12:42.447687  104058 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1026 01:12:42.447702  104058 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1026 01:12:42.447719  104058 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1026 01:12:42.447737  104058 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1026 01:12:42.447749  104058 command_runner.go:130] > #   should be moved to the container's cgroup
	I1026 01:12:42.447761  104058 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1026 01:12:42.447774  104058 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I1026 01:12:42.447784  104058 command_runner.go:130] > runtime_type = "oci"
	I1026 01:12:42.447793  104058 command_runner.go:130] > runtime_root = "/run/runc"
	I1026 01:12:42.447803  104058 command_runner.go:130] > runtime_config_path = ""
	I1026 01:12:42.447814  104058 command_runner.go:130] > monitor_path = ""
	I1026 01:12:42.447823  104058 command_runner.go:130] > monitor_cgroup = ""
	I1026 01:12:42.447833  104058 command_runner.go:130] > monitor_exec_cgroup = ""
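Following the table format documented above, a hypothetical additional handler could be declared alongside the `runc` entry; the binary path and root directory here are assumptions, not values from this run:

```toml
# Hypothetical extra OCI runtime handler; runtime_path must match the
# actual install location on the host. Pods select it via a
# RuntimeClass whose "handler" field is "crun".
[crio.runtime.runtimes.crun]
runtime_path = "/usr/bin/crun"
runtime_type = "oci"
runtime_root = "/run/crun"
```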
	I1026 01:12:42.447903  104058 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1026 01:12:42.447914  104058 command_runner.go:130] > # running containers
	I1026 01:12:42.447921  104058 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1026 01:12:42.447932  104058 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1026 01:12:42.447948  104058 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1026 01:12:42.447961  104058 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1026 01:12:42.447974  104058 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1026 01:12:42.447986  104058 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1026 01:12:42.448002  104058 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1026 01:12:42.448013  104058 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1026 01:12:42.448022  104058 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1026 01:12:42.448031  104058 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1026 01:12:42.448046  104058 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1026 01:12:42.448059  104058 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1026 01:12:42.448074  104058 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1026 01:12:42.448090  104058 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1026 01:12:42.448106  104058 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1026 01:12:42.448120  104058 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1026 01:12:42.448139  104058 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1026 01:12:42.448156  104058 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1026 01:12:42.448170  104058 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1026 01:12:42.448189  104058 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1026 01:12:42.448200  104058 command_runner.go:130] > # Example:
	I1026 01:12:42.448212  104058 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1026 01:12:42.448224  104058 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1026 01:12:42.448237  104058 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1026 01:12:42.448252  104058 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1026 01:12:42.448262  104058 command_runner.go:130] > # cpuset = "0-1"
	I1026 01:12:42.448270  104058 command_runner.go:130] > # cpushares = 0
	I1026 01:12:42.448279  104058 command_runner.go:130] > # Where:
	I1026 01:12:42.448290  104058 command_runner.go:130] > # The workload name is workload-type.
	I1026 01:12:42.448306  104058 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1026 01:12:42.448319  104058 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1026 01:12:42.448333  104058 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1026 01:12:42.448349  104058 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1026 01:12:42.448368  104058 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1026 01:12:42.448382  104058 command_runner.go:130] > # 
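Putting the workload pieces together, a concrete (hypothetical) definition might look like the following; the name, annotation keys, and resource values are illustrative assumptions:

```toml
# Hypothetical workload: pods carrying the "io.crio/throttled"
# annotation (key only, value ignored) default to 512 CPU shares
# pinned to CPUs 0-1; per-container overrides use the prefix
# "io.crio.throttled.<resource>/<container-name>".
[crio.runtime.workloads.throttled]
activation_annotation = "io.crio/throttled"
annotation_prefix = "io.crio.throttled"
[crio.runtime.workloads.throttled.resources]
cpushares = 512
cpuset = "0-1"
```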
	I1026 01:12:42.448398  104058 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1026 01:12:42.448407  104058 command_runner.go:130] > #
	I1026 01:12:42.448418  104058 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1026 01:12:42.448432  104058 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1026 01:12:42.448447  104058 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1026 01:12:42.448461  104058 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1026 01:12:42.448475  104058 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1026 01:12:42.448499  104058 command_runner.go:130] > [crio.image]
	I1026 01:12:42.448531  104058 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1026 01:12:42.448543  104058 command_runner.go:130] > # default_transport = "docker://"
	I1026 01:12:42.448557  104058 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1026 01:12:42.448571  104058 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1026 01:12:42.448583  104058 command_runner.go:130] > # global_auth_file = ""
	I1026 01:12:42.448596  104058 command_runner.go:130] > # The image used to instantiate infra containers.
	I1026 01:12:42.448608  104058 command_runner.go:130] > # This option supports live configuration reload.
	I1026 01:12:42.448620  104058 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1026 01:12:42.448636  104058 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1026 01:12:42.448649  104058 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1026 01:12:42.448661  104058 command_runner.go:130] > # This option supports live configuration reload.
	I1026 01:12:42.448673  104058 command_runner.go:130] > # pause_image_auth_file = ""
	I1026 01:12:42.448687  104058 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1026 01:12:42.448701  104058 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1026 01:12:42.448715  104058 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1026 01:12:42.448729  104058 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1026 01:12:42.448739  104058 command_runner.go:130] > # pause_command = "/pause"
	I1026 01:12:42.448753  104058 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1026 01:12:42.448768  104058 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1026 01:12:42.448783  104058 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1026 01:12:42.448797  104058 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1026 01:12:42.448810  104058 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1026 01:12:42.448821  104058 command_runner.go:130] > # signature_policy = ""
	I1026 01:12:42.448843  104058 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1026 01:12:42.448858  104058 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1026 01:12:42.448869  104058 command_runner.go:130] > # changing them here.
	I1026 01:12:42.448881  104058 command_runner.go:130] > # insecure_registries = [
	I1026 01:12:42.448891  104058 command_runner.go:130] > # ]
	I1026 01:12:42.448903  104058 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1026 01:12:42.448916  104058 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1026 01:12:42.448926  104058 command_runner.go:130] > # image_volumes = "mkdir"
	I1026 01:12:42.448937  104058 command_runner.go:130] > # Temporary directory to use for storing big files
	I1026 01:12:42.448948  104058 command_runner.go:130] > # big_files_temporary_dir = ""
	I1026 01:12:42.448963  104058 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1026 01:12:42.448972  104058 command_runner.go:130] > # CNI plugins.
	I1026 01:12:42.448984  104058 command_runner.go:130] > [crio.network]
	I1026 01:12:42.448998  104058 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1026 01:12:42.449011  104058 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1026 01:12:42.449022  104058 command_runner.go:130] > # cni_default_network = ""
	I1026 01:12:42.449033  104058 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1026 01:12:42.449045  104058 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1026 01:12:42.449058  104058 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1026 01:12:42.449065  104058 command_runner.go:130] > # plugin_dirs = [
	I1026 01:12:42.449076  104058 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1026 01:12:42.449085  104058 command_runner.go:130] > # ]
	I1026 01:12:42.449096  104058 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1026 01:12:42.449106  104058 command_runner.go:130] > [crio.metrics]
	I1026 01:12:42.449118  104058 command_runner.go:130] > # Globally enable or disable metrics support.
	I1026 01:12:42.449129  104058 command_runner.go:130] > # enable_metrics = false
	I1026 01:12:42.449139  104058 command_runner.go:130] > # Specify enabled metrics collectors.
	I1026 01:12:42.449151  104058 command_runner.go:130] > # Per default all metrics are enabled.
	I1026 01:12:42.449165  104058 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1026 01:12:42.449180  104058 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1026 01:12:42.449199  104058 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1026 01:12:42.449215  104058 command_runner.go:130] > # metrics_collectors = [
	I1026 01:12:42.449224  104058 command_runner.go:130] > # 	"operations",
	I1026 01:12:42.449236  104058 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1026 01:12:42.449248  104058 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1026 01:12:42.449259  104058 command_runner.go:130] > # 	"operations_errors",
	I1026 01:12:42.449270  104058 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1026 01:12:42.449279  104058 command_runner.go:130] > # 	"image_pulls_by_name",
	I1026 01:12:42.449290  104058 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1026 01:12:42.449301  104058 command_runner.go:130] > # 	"image_pulls_failures",
	I1026 01:12:42.449311  104058 command_runner.go:130] > # 	"image_pulls_successes",
	I1026 01:12:42.449319  104058 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1026 01:12:42.449330  104058 command_runner.go:130] > # 	"image_layer_reuse",
	I1026 01:12:42.449341  104058 command_runner.go:130] > # 	"containers_oom_total",
	I1026 01:12:42.449349  104058 command_runner.go:130] > # 	"containers_oom",
	I1026 01:12:42.449360  104058 command_runner.go:130] > # 	"processes_defunct",
	I1026 01:12:42.449371  104058 command_runner.go:130] > # 	"operations_total",
	I1026 01:12:42.449382  104058 command_runner.go:130] > # 	"operations_latency_seconds",
	I1026 01:12:42.449399  104058 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1026 01:12:42.449410  104058 command_runner.go:130] > # 	"operations_errors_total",
	I1026 01:12:42.449418  104058 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1026 01:12:42.449430  104058 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1026 01:12:42.449441  104058 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1026 01:12:42.449451  104058 command_runner.go:130] > # 	"image_pulls_success_total",
	I1026 01:12:42.449462  104058 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1026 01:12:42.449473  104058 command_runner.go:130] > # 	"containers_oom_count_total",
	I1026 01:12:42.449482  104058 command_runner.go:130] > # ]
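The prefix rule described above ("operations", "crio_operations", and "container_runtime_crio_operations" all name the same collector) can be sketched as a small normalizer; this is an illustration of the naming convention, not CRI-O's actual implementation:

```python
def normalize_collector(name: str) -> str:
    """Strip the optional "container_runtime_" and "crio_" prefixes
    so all three spellings of a collector name compare equal."""
    for prefix in ("container_runtime_", "crio_"):
        if name.startswith(prefix):
            name = name[len(prefix):]
    return name


# All three spellings resolve to the same collector:
assert normalize_collector("operations") == "operations"
assert normalize_collector("crio_operations") == "operations"
assert normalize_collector("container_runtime_crio_operations") == "operations"
```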
	I1026 01:12:42.449492  104058 command_runner.go:130] > # The port on which the metrics server will listen.
	I1026 01:12:42.449502  104058 command_runner.go:130] > # metrics_port = 9090
	I1026 01:12:42.449512  104058 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1026 01:12:42.449526  104058 command_runner.go:130] > # metrics_socket = ""
	I1026 01:12:42.449539  104058 command_runner.go:130] > # The certificate for the secure metrics server.
	I1026 01:12:42.449553  104058 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1026 01:12:42.449567  104058 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1026 01:12:42.449579  104058 command_runner.go:130] > # certificate on any modification event.
	I1026 01:12:42.449589  104058 command_runner.go:130] > # metrics_cert = ""
	I1026 01:12:42.449603  104058 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1026 01:12:42.449616  104058 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1026 01:12:42.449627  104058 command_runner.go:130] > # metrics_key = ""
	I1026 01:12:42.449641  104058 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1026 01:12:42.449650  104058 command_runner.go:130] > [crio.tracing]
	I1026 01:12:42.449662  104058 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1026 01:12:42.449710  104058 command_runner.go:130] > # enable_tracing = false
	I1026 01:12:42.449724  104058 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1026 01:12:42.449734  104058 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1026 01:12:42.449746  104058 command_runner.go:130] > # Number of samples to collect per million spans.
	I1026 01:12:42.449758  104058 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1026 01:12:42.449772  104058 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1026 01:12:42.449785  104058 command_runner.go:130] > [crio.stats]
	I1026 01:12:42.449798  104058 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1026 01:12:42.449812  104058 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1026 01:12:42.449823  104058 command_runner.go:130] > # stats_collection_period = 0
	I1026 01:12:42.449917  104058 cni.go:84] Creating CNI manager for ""
	I1026 01:12:42.449930  104058 cni.go:136] 1 nodes found, recommending kindnet
	I1026 01:12:42.449953  104058 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1026 01:12:42.449983  104058 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-204768 NodeName:multinode-204768 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 01:12:42.450156  104058 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-204768"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 01:12:42.450247  104058 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-204768 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-204768 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1026 01:12:42.450305  104058 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1026 01:12:42.458034  104058 command_runner.go:130] > kubeadm
	I1026 01:12:42.458056  104058 command_runner.go:130] > kubectl
	I1026 01:12:42.458064  104058 command_runner.go:130] > kubelet
	I1026 01:12:42.458679  104058 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 01:12:42.458749  104058 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 01:12:42.466414  104058 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I1026 01:12:42.482492  104058 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 01:12:42.498077  104058 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I1026 01:12:42.513905  104058 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1026 01:12:42.517115  104058 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 01:12:42.526937  104058 certs.go:56] Setting up /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/multinode-204768 for IP: 192.168.58.2
	I1026 01:12:42.526975  104058 certs.go:190] acquiring lock for shared ca certs: {Name:mk5c45c423cc5a6761a0ccf5b25a0c8b531fe271 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:12:42.527115  104058 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17491-8444/.minikube/ca.key
	I1026 01:12:42.527185  104058 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17491-8444/.minikube/proxy-client-ca.key
	I1026 01:12:42.527249  104058 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/multinode-204768/client.key
	I1026 01:12:42.527273  104058 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/multinode-204768/client.crt with IP's: []
	I1026 01:12:42.788206  104058 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/multinode-204768/client.crt ...
	I1026 01:12:42.788241  104058 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/multinode-204768/client.crt: {Name:mk4eb8757b4eea01873859a1e3f6e76056c578eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:12:42.788438  104058 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/multinode-204768/client.key ...
	I1026 01:12:42.788457  104058 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/multinode-204768/client.key: {Name:mk9848f1703b393f314a5b3df23fc0ff628eecf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:12:42.788573  104058 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/multinode-204768/apiserver.key.cee25041
	I1026 01:12:42.788591  104058 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/multinode-204768/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1026 01:12:42.891256  104058 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/multinode-204768/apiserver.crt.cee25041 ...
	I1026 01:12:42.891298  104058 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/multinode-204768/apiserver.crt.cee25041: {Name:mk0e8459add55effc2b6b35e2fa8377b0cf0b004 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:12:42.891493  104058 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/multinode-204768/apiserver.key.cee25041 ...
	I1026 01:12:42.891507  104058 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/multinode-204768/apiserver.key.cee25041: {Name:mk9a72c384174122dd2cecb662583462d449a3e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:12:42.891582  104058 certs.go:337] copying /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/multinode-204768/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/multinode-204768/apiserver.crt
	I1026 01:12:42.891680  104058 certs.go:341] copying /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/multinode-204768/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/multinode-204768/apiserver.key
	I1026 01:12:42.891742  104058 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/multinode-204768/proxy-client.key
	I1026 01:12:42.891758  104058 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/multinode-204768/proxy-client.crt with IP's: []
	I1026 01:12:42.964621  104058 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/multinode-204768/proxy-client.crt ...
	I1026 01:12:42.964651  104058 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/multinode-204768/proxy-client.crt: {Name:mk070c537a15381d944930185d525e30caaa2883 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:12:42.964810  104058 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/multinode-204768/proxy-client.key ...
	I1026 01:12:42.964821  104058 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/multinode-204768/proxy-client.key: {Name:mkac8bea10f41fe0e3900972fd85313e3ef72add Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:12:42.964894  104058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/multinode-204768/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1026 01:12:42.964913  104058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/multinode-204768/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1026 01:12:42.964924  104058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/multinode-204768/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1026 01:12:42.964934  104058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/multinode-204768/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1026 01:12:42.964949  104058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17491-8444/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1026 01:12:42.964959  104058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17491-8444/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1026 01:12:42.964970  104058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17491-8444/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1026 01:12:42.964983  104058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17491-8444/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1026 01:12:42.965032  104058 certs.go:437] found cert: /home/jenkins/minikube-integration/17491-8444/.minikube/certs/home/jenkins/minikube-integration/17491-8444/.minikube/certs/15246.pem (1338 bytes)
	W1026 01:12:42.965068  104058 certs.go:433] ignoring /home/jenkins/minikube-integration/17491-8444/.minikube/certs/home/jenkins/minikube-integration/17491-8444/.minikube/certs/15246_empty.pem, impossibly tiny 0 bytes
	I1026 01:12:42.965079  104058 certs.go:437] found cert: /home/jenkins/minikube-integration/17491-8444/.minikube/certs/home/jenkins/minikube-integration/17491-8444/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 01:12:42.965099  104058 certs.go:437] found cert: /home/jenkins/minikube-integration/17491-8444/.minikube/certs/home/jenkins/minikube-integration/17491-8444/.minikube/certs/ca.pem (1078 bytes)
	I1026 01:12:42.965121  104058 certs.go:437] found cert: /home/jenkins/minikube-integration/17491-8444/.minikube/certs/home/jenkins/minikube-integration/17491-8444/.minikube/certs/cert.pem (1123 bytes)
	I1026 01:12:42.965148  104058 certs.go:437] found cert: /home/jenkins/minikube-integration/17491-8444/.minikube/certs/home/jenkins/minikube-integration/17491-8444/.minikube/certs/key.pem (1675 bytes)
	I1026 01:12:42.965187  104058 certs.go:437] found cert: /home/jenkins/minikube-integration/17491-8444/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17491-8444/.minikube/files/etc/ssl/certs/152462.pem (1708 bytes)
	I1026 01:12:42.965211  104058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17491-8444/.minikube/certs/15246.pem -> /usr/share/ca-certificates/15246.pem
	I1026 01:12:42.965236  104058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17491-8444/.minikube/files/etc/ssl/certs/152462.pem -> /usr/share/ca-certificates/152462.pem
	I1026 01:12:42.965257  104058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17491-8444/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:12:42.965802  104058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/multinode-204768/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1026 01:12:42.987165  104058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/multinode-204768/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 01:12:43.007840  104058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/multinode-204768/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 01:12:43.029054  104058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/multinode-204768/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 01:12:43.050838  104058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 01:12:43.072770  104058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 01:12:43.093683  104058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 01:12:43.114240  104058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1026 01:12:43.134803  104058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/certs/15246.pem --> /usr/share/ca-certificates/15246.pem (1338 bytes)
	I1026 01:12:43.155720  104058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/files/etc/ssl/certs/152462.pem --> /usr/share/ca-certificates/152462.pem (1708 bytes)
	I1026 01:12:43.176314  104058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 01:12:43.196982  104058 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 01:12:43.212162  104058 ssh_runner.go:195] Run: openssl version
	I1026 01:12:43.216784  104058 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1026 01:12:43.216965  104058 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15246.pem && ln -fs /usr/share/ca-certificates/15246.pem /etc/ssl/certs/15246.pem"
	I1026 01:12:43.225048  104058 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15246.pem
	I1026 01:12:43.228133  104058 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 26 01:00 /usr/share/ca-certificates/15246.pem
	I1026 01:12:43.228155  104058 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 26 01:00 /usr/share/ca-certificates/15246.pem
	I1026 01:12:43.228196  104058 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15246.pem
	I1026 01:12:43.234240  104058 command_runner.go:130] > 51391683
	I1026 01:12:43.234459  104058 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15246.pem /etc/ssl/certs/51391683.0"
	I1026 01:12:43.242737  104058 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/152462.pem && ln -fs /usr/share/ca-certificates/152462.pem /etc/ssl/certs/152462.pem"
	I1026 01:12:43.251166  104058 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/152462.pem
	I1026 01:12:43.254248  104058 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 26 01:00 /usr/share/ca-certificates/152462.pem
	I1026 01:12:43.254272  104058 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 26 01:00 /usr/share/ca-certificates/152462.pem
	I1026 01:12:43.254304  104058 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/152462.pem
	I1026 01:12:43.260220  104058 command_runner.go:130] > 3ec20f2e
	I1026 01:12:43.260367  104058 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/152462.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 01:12:43.268566  104058 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 01:12:43.276747  104058 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:12:43.279855  104058 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 26 00:54 /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:12:43.279887  104058 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 26 00:54 /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:12:43.279926  104058 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:12:43.285771  104058 command_runner.go:130] > b5213941
	I1026 01:12:43.285960  104058 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 01:12:43.293970  104058 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1026 01:12:43.296853  104058 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1026 01:12:43.296897  104058 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1026 01:12:43.296964  104058 kubeadm.go:404] StartCluster: {Name:multinode-204768 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-204768 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1026 01:12:43.297062  104058 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 01:12:43.297121  104058 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 01:12:43.329231  104058 cri.go:89] found id: ""
	I1026 01:12:43.329299  104058 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 01:12:43.336759  104058 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1026 01:12:43.336780  104058 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1026 01:12:43.336790  104058 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1026 01:12:43.337471  104058 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 01:12:43.345117  104058 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1026 01:12:43.345179  104058 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 01:12:43.352587  104058 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1026 01:12:43.352616  104058 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1026 01:12:43.352625  104058 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1026 01:12:43.352643  104058 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 01:12:43.352671  104058 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 01:12:43.352706  104058 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1026 01:12:43.431244  104058 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-gcp\n", err: exit status 1
	I1026 01:12:43.431279  104058 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-gcp\n", err: exit status 1
	I1026 01:12:43.496617  104058 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1026 01:12:43.496645  104058 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1026 01:12:52.201034  104058 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1026 01:12:52.201068  104058 command_runner.go:130] > [init] Using Kubernetes version: v1.28.3
	I1026 01:12:52.201120  104058 kubeadm.go:322] [preflight] Running pre-flight checks
	I1026 01:12:52.201132  104058 command_runner.go:130] > [preflight] Running pre-flight checks
	I1026 01:12:52.201242  104058 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1026 01:12:52.201254  104058 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1026 01:12:52.201321  104058 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1045-gcp
	I1026 01:12:52.201331  104058 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1045-gcp
	I1026 01:12:52.201379  104058 kubeadm.go:322] OS: Linux
	I1026 01:12:52.201388  104058 command_runner.go:130] > OS: Linux
	I1026 01:12:52.201447  104058 kubeadm.go:322] CGROUPS_CPU: enabled
	I1026 01:12:52.201456  104058 command_runner.go:130] > CGROUPS_CPU: enabled
	I1026 01:12:52.201518  104058 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1026 01:12:52.201528  104058 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1026 01:12:52.201586  104058 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1026 01:12:52.201595  104058 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1026 01:12:52.201706  104058 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1026 01:12:52.201731  104058 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1026 01:12:52.201804  104058 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1026 01:12:52.201822  104058 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1026 01:12:52.201908  104058 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1026 01:12:52.201919  104058 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1026 01:12:52.201970  104058 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1026 01:12:52.201977  104058 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1026 01:12:52.202027  104058 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1026 01:12:52.202038  104058 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1026 01:12:52.202098  104058 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1026 01:12:52.202110  104058 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1026 01:12:52.202208  104058 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 01:12:52.202233  104058 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 01:12:52.202383  104058 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 01:12:52.202398  104058 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 01:12:52.202525  104058 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1026 01:12:52.202539  104058 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1026 01:12:52.202627  104058 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 01:12:52.204024  104058 out.go:204]   - Generating certificates and keys ...
	I1026 01:12:52.202722  104058 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 01:12:52.204138  104058 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1026 01:12:52.204153  104058 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1026 01:12:52.204240  104058 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1026 01:12:52.204251  104058 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1026 01:12:52.204346  104058 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 01:12:52.204357  104058 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 01:12:52.204438  104058 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1026 01:12:52.204446  104058 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1026 01:12:52.204532  104058 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1026 01:12:52.204540  104058 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1026 01:12:52.204606  104058 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1026 01:12:52.204617  104058 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1026 01:12:52.204697  104058 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1026 01:12:52.204705  104058 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1026 01:12:52.204856  104058 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-204768] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1026 01:12:52.204867  104058 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-204768] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1026 01:12:52.204935  104058 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1026 01:12:52.204948  104058 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1026 01:12:52.205096  104058 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-204768] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1026 01:12:52.205108  104058 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-204768] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1026 01:12:52.205194  104058 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 01:12:52.205206  104058 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 01:12:52.205286  104058 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 01:12:52.205297  104058 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 01:12:52.205351  104058 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1026 01:12:52.205361  104058 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1026 01:12:52.205431  104058 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 01:12:52.205442  104058 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 01:12:52.205507  104058 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 01:12:52.205536  104058 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 01:12:52.205611  104058 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 01:12:52.205623  104058 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 01:12:52.205721  104058 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 01:12:52.205733  104058 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 01:12:52.205804  104058 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 01:12:52.205816  104058 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 01:12:52.205915  104058 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 01:12:52.205927  104058 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 01:12:52.206009  104058 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 01:12:52.207512  104058 out.go:204]   - Booting up control plane ...
	I1026 01:12:52.206094  104058 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 01:12:52.207649  104058 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 01:12:52.207668  104058 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 01:12:52.207760  104058 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 01:12:52.207786  104058 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 01:12:52.207886  104058 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 01:12:52.207903  104058 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 01:12:52.208046  104058 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 01:12:52.208071  104058 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 01:12:52.208181  104058 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 01:12:52.208190  104058 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 01:12:52.208221  104058 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1026 01:12:52.208228  104058 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1026 01:12:52.208351  104058 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1026 01:12:52.208369  104058 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1026 01:12:52.208485  104058 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.502385 seconds
	I1026 01:12:52.208497  104058 command_runner.go:130] > [apiclient] All control plane components are healthy after 4.502385 seconds
	I1026 01:12:52.208637  104058 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 01:12:52.208648  104058 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 01:12:52.208745  104058 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 01:12:52.208752  104058 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 01:12:52.208800  104058 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 01:12:52.208807  104058 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1026 01:12:52.208977  104058 kubeadm.go:322] [mark-control-plane] Marking the node multinode-204768 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 01:12:52.208985  104058 command_runner.go:130] > [mark-control-plane] Marking the node multinode-204768 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 01:12:52.209041  104058 kubeadm.go:322] [bootstrap-token] Using token: gxvmwu.4yw8ik4z5q33kqvz
	I1026 01:12:52.209058  104058 command_runner.go:130] > [bootstrap-token] Using token: gxvmwu.4yw8ik4z5q33kqvz
	I1026 01:12:52.210600  104058 out.go:204]   - Configuring RBAC rules ...
	I1026 01:12:52.210745  104058 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 01:12:52.210757  104058 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 01:12:52.210898  104058 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 01:12:52.210914  104058 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 01:12:52.211075  104058 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 01:12:52.211082  104058 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 01:12:52.211217  104058 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 01:12:52.211235  104058 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 01:12:52.211376  104058 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 01:12:52.211387  104058 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 01:12:52.211527  104058 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 01:12:52.211539  104058 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 01:12:52.211697  104058 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 01:12:52.211708  104058 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 01:12:52.211774  104058 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1026 01:12:52.211784  104058 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1026 01:12:52.211838  104058 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1026 01:12:52.211848  104058 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1026 01:12:52.211854  104058 kubeadm.go:322] 
	I1026 01:12:52.211939  104058 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1026 01:12:52.211949  104058 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1026 01:12:52.211954  104058 kubeadm.go:322] 
	I1026 01:12:52.212037  104058 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1026 01:12:52.212047  104058 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1026 01:12:52.212053  104058 kubeadm.go:322] 
	I1026 01:12:52.212087  104058 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1026 01:12:52.212096  104058 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1026 01:12:52.212195  104058 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 01:12:52.212207  104058 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 01:12:52.212247  104058 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 01:12:52.212253  104058 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 01:12:52.212259  104058 kubeadm.go:322] 
	I1026 01:12:52.212314  104058 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1026 01:12:52.212332  104058 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1026 01:12:52.212353  104058 kubeadm.go:322] 
	I1026 01:12:52.212422  104058 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 01:12:52.212432  104058 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 01:12:52.212444  104058 kubeadm.go:322] 
	I1026 01:12:52.212524  104058 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1026 01:12:52.212536  104058 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1026 01:12:52.212639  104058 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 01:12:52.212656  104058 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 01:12:52.212758  104058 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 01:12:52.212774  104058 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 01:12:52.212792  104058 kubeadm.go:322] 
	I1026 01:12:52.212900  104058 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1026 01:12:52.212910  104058 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 01:12:52.213006  104058 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1026 01:12:52.213015  104058 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1026 01:12:52.213020  104058 kubeadm.go:322] 
	I1026 01:12:52.213125  104058 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token gxvmwu.4yw8ik4z5q33kqvz \
	I1026 01:12:52.213136  104058 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token gxvmwu.4yw8ik4z5q33kqvz \
	I1026 01:12:52.213262  104058 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:fcb226ee6da23e7f860dc1a15447b5e2bdaebad51636d54784ba9f6eb94cd3aa \
	I1026 01:12:52.213274  104058 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:fcb226ee6da23e7f860dc1a15447b5e2bdaebad51636d54784ba9f6eb94cd3aa \
	I1026 01:12:52.213303  104058 command_runner.go:130] > 	--control-plane 
	I1026 01:12:52.213313  104058 kubeadm.go:322] 	--control-plane 
	I1026 01:12:52.213324  104058 kubeadm.go:322] 
	I1026 01:12:52.213442  104058 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1026 01:12:52.213451  104058 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1026 01:12:52.213457  104058 kubeadm.go:322] 
	I1026 01:12:52.213565  104058 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token gxvmwu.4yw8ik4z5q33kqvz \
	I1026 01:12:52.213606  104058 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token gxvmwu.4yw8ik4z5q33kqvz \
	I1026 01:12:52.213799  104058 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:fcb226ee6da23e7f860dc1a15447b5e2bdaebad51636d54784ba9f6eb94cd3aa 
	I1026 01:12:52.213820  104058 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:fcb226ee6da23e7f860dc1a15447b5e2bdaebad51636d54784ba9f6eb94cd3aa 
	I1026 01:12:52.213828  104058 cni.go:84] Creating CNI manager for ""
	I1026 01:12:52.213839  104058 cni.go:136] 1 nodes found, recommending kindnet
	I1026 01:12:52.215382  104058 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1026 01:12:52.216975  104058 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1026 01:12:52.221580  104058 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1026 01:12:52.221605  104058 command_runner.go:130] >   Size: 3955775   	Blocks: 7736       IO Block: 4096   regular file
	I1026 01:12:52.221615  104058 command_runner.go:130] > Device: 36h/54d	Inode: 804964      Links: 1
	I1026 01:12:52.221635  104058 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1026 01:12:52.221646  104058 command_runner.go:130] > Access: 2023-05-09 19:53:47.000000000 +0000
	I1026 01:12:52.221654  104058 command_runner.go:130] > Modify: 2023-05-09 19:53:47.000000000 +0000
	I1026 01:12:52.221662  104058 command_runner.go:130] > Change: 2023-10-26 00:53:54.615237767 +0000
	I1026 01:12:52.221689  104058 command_runner.go:130] >  Birth: 2023-10-26 00:53:54.591235463 +0000
	I1026 01:12:52.221750  104058 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1026 01:12:52.221764  104058 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1026 01:12:52.237528  104058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1026 01:12:52.917638  104058 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1026 01:12:52.917659  104058 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1026 01:12:52.917686  104058 command_runner.go:130] > serviceaccount/kindnet created
	I1026 01:12:52.917693  104058 command_runner.go:130] > daemonset.apps/kindnet created
	I1026 01:12:52.917734  104058 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 01:12:52.917827  104058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:12:52.917827  104058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=af1d352f1030f8f3ea7f97e311e7fe82ef319942 minikube.k8s.io/name=multinode-204768 minikube.k8s.io/updated_at=2023_10_26T01_12_52_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:12:52.925029  104058 command_runner.go:130] > -16
	I1026 01:12:52.925066  104058 ops.go:34] apiserver oom_adj: -16
	I1026 01:12:53.009799  104058 command_runner.go:130] > node/multinode-204768 labeled
	I1026 01:12:53.012255  104058 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1026 01:12:53.012394  104058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:12:53.076574  104058 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1026 01:12:53.076684  104058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:12:53.197389  104058 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1026 01:12:53.698117  104058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:12:53.765060  104058 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1026 01:12:54.197624  104058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:12:54.262694  104058 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1026 01:12:54.698371  104058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:12:54.761130  104058 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1026 01:12:55.197724  104058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:12:55.261120  104058 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1026 01:12:55.697791  104058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:12:55.760023  104058 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1026 01:12:56.198597  104058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:12:56.261742  104058 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1026 01:12:56.698336  104058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:12:56.758603  104058 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1026 01:12:57.197707  104058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:12:57.260481  104058 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1026 01:12:57.698063  104058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:12:57.759333  104058 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1026 01:12:58.197647  104058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:12:58.260838  104058 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1026 01:12:58.698199  104058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:12:58.759727  104058 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1026 01:12:59.197873  104058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:12:59.263000  104058 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1026 01:12:59.697602  104058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:12:59.761987  104058 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1026 01:13:00.197566  104058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:13:00.260523  104058 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1026 01:13:00.698428  104058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:13:00.759583  104058 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1026 01:13:01.197561  104058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:13:01.258049  104058 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1026 01:13:01.698520  104058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:13:01.761382  104058 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1026 01:13:02.197952  104058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:13:02.264324  104058 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1026 01:13:02.698574  104058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:13:02.760768  104058 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1026 01:13:03.198219  104058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:13:03.259203  104058 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1026 01:13:03.697733  104058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:13:03.762506  104058 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1026 01:13:04.198088  104058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:13:04.258613  104058 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1026 01:13:04.697598  104058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:13:04.768453  104058 command_runner.go:130] > NAME      SECRETS   AGE
	I1026 01:13:04.768479  104058 command_runner.go:130] > default   0         0s
	I1026 01:13:04.768501  104058 kubeadm.go:1081] duration metric: took 11.850737446s to wait for elevateKubeSystemPrivileges.
	I1026 01:13:04.768514  104058 kubeadm.go:406] StartCluster complete in 21.471556877s
	I1026 01:13:04.768531  104058 settings.go:142] acquiring lock: {Name:mk3f6a6b512050e15c823ee035bfa16b068e5bc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:13:04.768598  104058 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17491-8444/kubeconfig
	I1026 01:13:04.769257  104058 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17491-8444/kubeconfig: {Name:mkd7fc4e7a7060baa25a329208944605474cc380 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:13:04.769615  104058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 01:13:04.769710  104058 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1026 01:13:04.769795  104058 addons.go:69] Setting storage-provisioner=true in profile "multinode-204768"
	I1026 01:13:04.769813  104058 addons.go:231] Setting addon storage-provisioner=true in "multinode-204768"
	I1026 01:13:04.769827  104058 addons.go:69] Setting default-storageclass=true in profile "multinode-204768"
	I1026 01:13:04.769849  104058 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-204768"
	I1026 01:13:04.769850  104058 config.go:182] Loaded profile config "multinode-204768": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1026 01:13:04.769858  104058 host.go:66] Checking if "multinode-204768" exists ...
	I1026 01:13:04.769981  104058 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17491-8444/kubeconfig
	I1026 01:13:04.770231  104058 cli_runner.go:164] Run: docker container inspect multinode-204768 --format={{.State.Status}}
	I1026 01:13:04.770295  104058 kapi.go:59] client config for multinode-204768: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17491-8444/.minikube/profiles/multinode-204768/client.crt", KeyFile:"/home/jenkins/minikube-integration/17491-8444/.minikube/profiles/multinode-204768/client.key", CAFile:"/home/jenkins/minikube-integration/17491-8444/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c28c40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1026 01:13:04.770388  104058 cli_runner.go:164] Run: docker container inspect multinode-204768 --format={{.State.Status}}
	I1026 01:13:04.771455  104058 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1026 01:13:04.771479  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:04.771492  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:04.771501  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:04.771737  104058 cert_rotation.go:137] Starting client certificate rotation controller
	I1026 01:13:04.788829  104058 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17491-8444/kubeconfig
	I1026 01:13:04.789046  104058 kapi.go:59] client config for multinode-204768: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17491-8444/.minikube/profiles/multinode-204768/client.crt", KeyFile:"/home/jenkins/minikube-integration/17491-8444/.minikube/profiles/multinode-204768/client.key", CAFile:"/home/jenkins/minikube-integration/17491-8444/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c28c40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1026 01:13:04.791959  104058 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 01:13:04.789316  104058 addons.go:231] Setting addon default-storageclass=true in "multinode-204768"
	I1026 01:13:04.791714  104058 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I1026 01:13:04.793722  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:04.793738  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:04.793752  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:04.793761  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:04.793770  104058 round_trippers.go:580]     Content-Length: 291
	I1026 01:13:04.793786  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:04 GMT
	I1026 01:13:04.793795  104058 round_trippers.go:580]     Audit-Id: ef074ec4-6b4c-493b-99fe-98992a544b25
	I1026 01:13:04.793817  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:04.793853  104058 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"748d54dc-a561-49f3-94e8-d26ebdbe621b","resourceVersion":"269","creationTimestamp":"2023-10-26T01:12:51Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1026 01:13:04.793905  104058 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 01:13:04.793921  104058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 01:13:04.793976  104058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-204768
	I1026 01:13:04.794066  104058 host.go:66] Checking if "multinode-204768" exists ...
	I1026 01:13:04.794403  104058 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"748d54dc-a561-49f3-94e8-d26ebdbe621b","resourceVersion":"269","creationTimestamp":"2023-10-26T01:12:51Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1026 01:13:04.794480  104058 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1026 01:13:04.794490  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:04.794501  104058 round_trippers.go:473]     Content-Type: application/json
	I1026 01:13:04.794511  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:04.794520  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:04.794662  104058 cli_runner.go:164] Run: docker container inspect multinode-204768 --format={{.State.Status}}
	I1026 01:13:04.804429  104058 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1026 01:13:04.804456  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:04.804474  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:04.804483  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:04.804492  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:04.804500  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:04.804510  104058 round_trippers.go:580]     Content-Length: 291
	I1026 01:13:04.804518  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:04 GMT
	I1026 01:13:04.804527  104058 round_trippers.go:580]     Audit-Id: 0fba4099-bf62-4440-a5a2-1c4494353ed3
	I1026 01:13:04.804553  104058 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"748d54dc-a561-49f3-94e8-d26ebdbe621b","resourceVersion":"358","creationTimestamp":"2023-10-26T01:12:51Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1026 01:13:04.804722  104058 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1026 01:13:04.804733  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:04.804743  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:04.804751  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:04.807198  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:04.807222  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:04.807231  104058 round_trippers.go:580]     Audit-Id: 98e198a5-300b-451f-a6f6-fdf27ff01e58
	I1026 01:13:04.807240  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:04.807248  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:04.807256  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:04.807264  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:04.807271  104058 round_trippers.go:580]     Content-Length: 291
	I1026 01:13:04.807278  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:04 GMT
	I1026 01:13:04.807310  104058 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"748d54dc-a561-49f3-94e8-d26ebdbe621b","resourceVersion":"358","creationTimestamp":"2023-10-26T01:12:51Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1026 01:13:04.807481  104058 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-204768" context rescaled to 1 replicas
	I1026 01:13:04.807525  104058 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 01:13:04.809560  104058 out.go:177] * Verifying Kubernetes components...
	I1026 01:13:04.811328  104058 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 01:13:04.818778  104058 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 01:13:04.818806  104058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 01:13:04.818862  104058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-204768
	I1026 01:13:04.825797  104058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32849 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/multinode-204768/id_rsa Username:docker}
	I1026 01:13:04.843309  104058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32849 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/multinode-204768/id_rsa Username:docker}
	I1026 01:13:05.013880  104058 command_runner.go:130] > apiVersion: v1
	I1026 01:13:05.013910  104058 command_runner.go:130] > data:
	I1026 01:13:05.013918  104058 command_runner.go:130] >   Corefile: |
	I1026 01:13:05.013926  104058 command_runner.go:130] >     .:53 {
	I1026 01:13:05.013933  104058 command_runner.go:130] >         errors
	I1026 01:13:05.013942  104058 command_runner.go:130] >         health {
	I1026 01:13:05.013956  104058 command_runner.go:130] >            lameduck 5s
	I1026 01:13:05.013969  104058 command_runner.go:130] >         }
	I1026 01:13:05.013977  104058 command_runner.go:130] >         ready
	I1026 01:13:05.013993  104058 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1026 01:13:05.014006  104058 command_runner.go:130] >            pods insecure
	I1026 01:13:05.014016  104058 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1026 01:13:05.014029  104058 command_runner.go:130] >            ttl 30
	I1026 01:13:05.014040  104058 command_runner.go:130] >         }
	I1026 01:13:05.014047  104058 command_runner.go:130] >         prometheus :9153
	I1026 01:13:05.014056  104058 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1026 01:13:05.014065  104058 command_runner.go:130] >            max_concurrent 1000
	I1026 01:13:05.014076  104058 command_runner.go:130] >         }
	I1026 01:13:05.014083  104058 command_runner.go:130] >         cache 30
	I1026 01:13:05.014093  104058 command_runner.go:130] >         loop
	I1026 01:13:05.014104  104058 command_runner.go:130] >         reload
	I1026 01:13:05.014112  104058 command_runner.go:130] >         loadbalance
	I1026 01:13:05.014122  104058 command_runner.go:130] >     }
	I1026 01:13:05.014131  104058 command_runner.go:130] > kind: ConfigMap
	I1026 01:13:05.014142  104058 command_runner.go:130] > metadata:
	I1026 01:13:05.014152  104058 command_runner.go:130] >   creationTimestamp: "2023-10-26T01:12:51Z"
	I1026 01:13:05.014160  104058 command_runner.go:130] >   name: coredns
	I1026 01:13:05.014200  104058 command_runner.go:130] >   namespace: kube-system
	I1026 01:13:05.014212  104058 command_runner.go:130] >   resourceVersion: "265"
	I1026 01:13:05.014221  104058 command_runner.go:130] >   uid: 1c75d2c0-619f-4604-81fb-513cfa86b559
	I1026 01:13:05.015073  104058 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 01:13:05.017560  104058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1026 01:13:05.017807  104058 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17491-8444/kubeconfig
	I1026 01:13:05.018192  104058 kapi.go:59] client config for multinode-204768: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17491-8444/.minikube/profiles/multinode-204768/client.crt", KeyFile:"/home/jenkins/minikube-integration/17491-8444/.minikube/profiles/multinode-204768/client.key", CAFile:"/home/jenkins/minikube-integration/17491-8444/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c28c40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1026 01:13:05.018506  104058 node_ready.go:35] waiting up to 6m0s for node "multinode-204768" to be "Ready" ...
	I1026 01:13:05.018587  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:05.018597  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:05.018609  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:05.018620  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:05.021166  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:05.021186  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:05.021206  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:05.021214  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:05 GMT
	I1026 01:13:05.021222  104058 round_trippers.go:580]     Audit-Id: 22114cd2-7653-4c39-9fdb-e44362f6ad0f
	I1026 01:13:05.021231  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:05.021239  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:05.021247  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:05.021380  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"355","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1026 01:13:05.022071  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:05.022080  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:05.022087  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:05.022093  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:05.098822  104058 round_trippers.go:574] Response Status: 200 OK in 76 milliseconds
	I1026 01:13:05.098850  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:05.098860  104058 round_trippers.go:580]     Audit-Id: 46d3aaac-9be0-439a-a49b-978cb98ae561
	I1026 01:13:05.098868  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:05.098875  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:05.098881  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:05.098898  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:05.098910  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:05 GMT
	I1026 01:13:05.099376  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"355","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1026 01:13:05.116526  104058 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 01:13:05.600086  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:05.600110  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:05.600120  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:05.600129  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:05.612720  104058 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1026 01:13:05.612747  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:05.612757  104058 round_trippers.go:580]     Audit-Id: 162df51f-171e-4dd7-a3de-c788ad1549b6
	I1026 01:13:05.612765  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:05.612772  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:05.612779  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:05.612786  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:05.612793  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:05 GMT
	I1026 01:13:05.612961  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"355","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1026 01:13:06.008738  104058 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1026 01:13:06.013859  104058 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1026 01:13:06.020841  104058 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1026 01:13:06.028487  104058 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1026 01:13:06.035218  104058 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1026 01:13:06.044028  104058 command_runner.go:130] > pod/storage-provisioner created
	I1026 01:13:06.049755  104058 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.034623949s)
	I1026 01:13:06.049800  104058 command_runner.go:130] > configmap/coredns replaced
	I1026 01:13:06.049854  104058 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.032265355s)
	I1026 01:13:06.049864  104058 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1026 01:13:06.049874  104058 start.go:926] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
	I1026 01:13:06.049965  104058 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I1026 01:13:06.049976  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:06.049987  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:06.049998  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:06.051886  104058 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1026 01:13:06.051906  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:06.051915  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:06 GMT
	I1026 01:13:06.051922  104058 round_trippers.go:580]     Audit-Id: 2c416c21-9a8f-4f58-a139-ea4bd97bf777
	I1026 01:13:06.051930  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:06.051938  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:06.051947  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:06.051959  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:06.051969  104058 round_trippers.go:580]     Content-Length: 1273
	I1026 01:13:06.052052  104058 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"408"},"items":[{"metadata":{"name":"standard","uid":"c2d16feb-001e-4983-a483-cffdd4363ec9","resourceVersion":"399","creationTimestamp":"2023-10-26T01:13:05Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-26T01:13:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1026 01:13:06.052382  104058 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"c2d16feb-001e-4983-a483-cffdd4363ec9","resourceVersion":"399","creationTimestamp":"2023-10-26T01:13:05Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-26T01:13:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1026 01:13:06.052440  104058 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1026 01:13:06.052452  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:06.052465  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:06.052482  104058 round_trippers.go:473]     Content-Type: application/json
	I1026 01:13:06.052496  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:06.055440  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:06.055461  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:06.055471  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:06.055479  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:06.055488  104058 round_trippers.go:580]     Content-Length: 1220
	I1026 01:13:06.055501  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:06 GMT
	I1026 01:13:06.055513  104058 round_trippers.go:580]     Audit-Id: 8d669029-9433-4148-aa02-06b1eace2470
	I1026 01:13:06.055526  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:06.055538  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:06.055572  104058 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"c2d16feb-001e-4983-a483-cffdd4363ec9","resourceVersion":"399","creationTimestamp":"2023-10-26T01:13:05Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-26T01:13:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1026 01:13:06.057761  104058 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1026 01:13:06.059379  104058 addons.go:502] enable addons completed in 1.289669617s: enabled=[storage-provisioner default-storageclass]
	I1026 01:13:06.100711  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:06.100740  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:06.100750  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:06.100759  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:06.104294  104058 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:13:06.104316  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:06.104327  104058 round_trippers.go:580]     Audit-Id: 19a3acfc-bc1f-4265-ad15-07892dfac5d4
	I1026 01:13:06.104337  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:06.104344  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:06.104352  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:06.104360  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:06.104369  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:06 GMT
	I1026 01:13:06.104554  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"355","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1026 01:13:06.600078  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:06.600102  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:06.600110  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:06.600115  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:06.602464  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:06.602490  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:06.602498  104058 round_trippers.go:580]     Audit-Id: af2ec009-5562-4edb-b864-3f2929f239ad
	I1026 01:13:06.602506  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:06.602514  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:06.602522  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:06.602557  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:06.602569  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:06 GMT
	I1026 01:13:06.602733  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"355","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1026 01:13:07.100267  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:07.100297  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:07.100309  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:07.100318  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:07.102640  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:07.102662  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:07.102672  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:07 GMT
	I1026 01:13:07.102679  104058 round_trippers.go:580]     Audit-Id: 88ef1fcc-67d7-457c-8608-fa5a777d4134
	I1026 01:13:07.102686  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:07.102693  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:07.102700  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:07.102708  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:07.102841  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"355","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1026 01:13:07.103156  104058 node_ready.go:58] node "multinode-204768" has status "Ready":"False"
	I1026 01:13:07.600370  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:07.600413  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:07.600421  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:07.600427  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:07.602875  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:07.602901  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:07.602911  104058 round_trippers.go:580]     Audit-Id: 28c791fe-d042-4d4b-8a47-d37a0c1f2085
	I1026 01:13:07.602918  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:07.602925  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:07.602933  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:07.602941  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:07.602950  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:07 GMT
	I1026 01:13:07.603126  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"355","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1026 01:13:08.100902  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:08.100928  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:08.100935  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:08.100941  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:08.103257  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:08.103283  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:08.103293  104058 round_trippers.go:580]     Audit-Id: f90bf0ef-6e1a-4531-9313-88b1a05cc5da
	I1026 01:13:08.103302  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:08.103311  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:08.103318  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:08.103329  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:08.103349  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:08 GMT
	I1026 01:13:08.103476  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"355","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1026 01:13:08.600043  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:08.600067  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:08.600076  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:08.600082  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:08.602385  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:08.602404  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:08.602411  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:08 GMT
	I1026 01:13:08.602417  104058 round_trippers.go:580]     Audit-Id: e328698f-176e-40ba-9e93-3168f46a6271
	I1026 01:13:08.602422  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:08.602442  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:08.602450  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:08.602459  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:08.602605  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"355","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1026 01:13:09.100140  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:09.100162  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:09.100170  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:09.100176  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:09.102595  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:09.102616  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:09.102623  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:09.102628  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:09.102633  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:09.102638  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:09.102643  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:09 GMT
	I1026 01:13:09.102648  104058 round_trippers.go:580]     Audit-Id: 3e91d8f7-5c36-403d-8a60-0443c4c888d1
	I1026 01:13:09.102801  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"355","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1026 01:13:09.600392  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:09.600414  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:09.600422  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:09.600428  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:09.602787  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:09.602808  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:09.602815  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:09.602824  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:09 GMT
	I1026 01:13:09.602832  104058 round_trippers.go:580]     Audit-Id: f1a1e0ff-b9b2-4ec7-970e-339e054afee6
	I1026 01:13:09.602840  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:09.602852  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:09.602862  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:09.602976  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"355","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1026 01:13:09.603270  104058 node_ready.go:58] node "multinode-204768" has status "Ready":"False"
	I1026 01:13:10.100588  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:10.100611  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:10.100619  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:10.100624  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:10.102873  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:10.102894  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:10.102905  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:10 GMT
	I1026 01:13:10.102916  104058 round_trippers.go:580]     Audit-Id: e28c5f21-6438-41b7-9d2e-c38b0399abf8
	I1026 01:13:10.102924  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:10.102937  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:10.102945  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:10.102965  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:10.103074  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"355","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1026 01:13:10.600929  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:10.600954  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:10.600962  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:10.600968  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:10.603373  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:10.603400  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:10.603409  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:10.603418  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:10.603426  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:10.603433  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:10 GMT
	I1026 01:13:10.603441  104058 round_trippers.go:580]     Audit-Id: 58063bef-de8e-460f-a574-79cdee4a2ccd
	I1026 01:13:10.603453  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:10.603640  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"355","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1026 01:13:11.100110  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:11.100135  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:11.100143  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:11.100149  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:11.102491  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:11.102515  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:11.102522  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:11.102528  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:11.102533  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:11 GMT
	I1026 01:13:11.102538  104058 round_trippers.go:580]     Audit-Id: 5aed085a-5d28-4331-bbee-de95bbc84903
	I1026 01:13:11.102544  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:11.102554  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:11.102691  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"355","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1026 01:13:11.600203  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:11.600248  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:11.600256  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:11.600265  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:11.602740  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:11.602760  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:11.602782  104058 round_trippers.go:580]     Audit-Id: 28f40e0f-3da3-40f3-8eea-b26e606060a3
	I1026 01:13:11.602788  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:11.602794  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:11.602799  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:11.602805  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:11.602812  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:11 GMT
	I1026 01:13:11.602956  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"355","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1026 01:13:12.100606  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:12.100637  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:12.100649  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:12.100659  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:12.103107  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:12.103133  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:12.103142  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:12.103150  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:12.103158  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:12 GMT
	I1026 01:13:12.103165  104058 round_trippers.go:580]     Audit-Id: 35b166b3-1657-43d7-b6e8-79ef0d177b0c
	I1026 01:13:12.103174  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:12.103185  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:12.103282  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"355","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1026 01:13:12.103592  104058 node_ready.go:58] node "multinode-204768" has status "Ready":"False"
	I1026 01:13:12.600956  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:12.600979  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:12.600993  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:12.601007  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:12.603349  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:12.603368  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:12.603383  104058 round_trippers.go:580]     Audit-Id: 8a335cc3-3757-43fd-941f-93294c929557
	I1026 01:13:12.603394  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:12.603401  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:12.603409  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:12.603418  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:12.603427  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:12 GMT
	I1026 01:13:12.603548  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"355","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1026 01:13:13.100122  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:13.100149  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:13.100160  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:13.100168  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:13.102496  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:13.102514  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:13.102521  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:13 GMT
	I1026 01:13:13.102526  104058 round_trippers.go:580]     Audit-Id: eda687c6-466d-43c0-b739-ae0aadfc8050
	I1026 01:13:13.102531  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:13.102537  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:13.102542  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:13.102550  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:13.102801  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"355","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1026 01:13:13.600333  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:13.600354  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:13.600362  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:13.600368  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:13.602555  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:13.602580  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:13.602592  104058 round_trippers.go:580]     Audit-Id: b54bf582-5972-4821-a501-7f7cc6f30eed
	I1026 01:13:13.602601  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:13.602610  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:13.602618  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:13.602627  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:13.602636  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:13 GMT
	I1026 01:13:13.602738  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"355","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1026 01:13:14.100263  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:14.100287  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:14.100295  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:14.100301  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:14.102658  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:14.102685  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:14.102695  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:14 GMT
	I1026 01:13:14.102703  104058 round_trippers.go:580]     Audit-Id: 68154e1e-997a-4b6a-a187-c513c0801ab0
	I1026 01:13:14.102710  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:14.102719  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:14.102729  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:14.102744  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:14.102886  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"355","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1026 01:13:14.600319  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:14.600340  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:14.600349  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:14.600354  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:14.602659  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:14.602681  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:14.602691  104058 round_trippers.go:580]     Audit-Id: 87813da0-f69e-4a43-843b-7ae3ca4a0e05
	I1026 01:13:14.602698  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:14.602709  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:14.602716  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:14.602726  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:14.602738  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:14 GMT
	I1026 01:13:14.602840  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"355","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1026 01:13:14.603163  104058 node_ready.go:58] node "multinode-204768" has status "Ready":"False"
	I1026 01:13:15.100690  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:15.100714  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:15.100722  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:15.100728  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:15.103145  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:15.103169  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:15.103178  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:15.103187  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:15 GMT
	I1026 01:13:15.103196  104058 round_trippers.go:580]     Audit-Id: 60a4cff1-5888-434b-9fe3-aafbd7032ec6
	I1026 01:13:15.103204  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:15.103212  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:15.103223  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:15.103370  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"355","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1026 01:13:15.600004  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:15.600039  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:15.600047  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:15.600053  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:15.602481  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:15.602501  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:15.602511  104058 round_trippers.go:580]     Audit-Id: dfc46581-3f76-4599-8dcc-fc6f4ef5b1dc
	I1026 01:13:15.602516  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:15.602521  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:15.602528  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:15.602533  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:15.602538  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:15 GMT
	I1026 01:13:15.602654  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"355","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1026 01:13:16.100216  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:16.100251  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:16.100262  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:16.100271  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:16.102729  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:16.102747  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:16.102754  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:16.102759  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:16.102765  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:16.102770  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:16 GMT
	I1026 01:13:16.102775  104058 round_trippers.go:580]     Audit-Id: ed8f858f-d1be-41bd-aef2-0742998f8290
	I1026 01:13:16.102780  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:16.102951  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"355","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1026 01:13:16.600567  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:16.600589  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:16.600599  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:16.600604  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:16.602867  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:16.602897  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:16.602904  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:16 GMT
	I1026 01:13:16.602909  104058 round_trippers.go:580]     Audit-Id: bd70365c-4725-4113-b31e-b0e6ac0e840a
	I1026 01:13:16.602914  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:16.602919  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:16.602924  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:16.602929  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:16.603063  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"355","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1026 01:13:16.603355  104058 node_ready.go:58] node "multinode-204768" has status "Ready":"False"
	I1026 01:13:17.100728  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:17.100752  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:17.100760  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:17.100766  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:17.103079  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:17.103110  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:17.103126  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:17.103135  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:17.103143  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:17 GMT
	I1026 01:13:17.103151  104058 round_trippers.go:580]     Audit-Id: 955c14cf-5e5e-4873-9b9e-3f37a57818fc
	I1026 01:13:17.103163  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:17.103171  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:17.103295  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"355","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1026 01:13:17.600954  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:17.600988  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:17.601001  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:17.601010  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:17.603402  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:17.603428  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:17.603438  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:17.603446  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:17 GMT
	I1026 01:13:17.603457  104058 round_trippers.go:580]     Audit-Id: b608dc27-82f5-4029-b108-3a89c0e25518
	I1026 01:13:17.603464  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:17.603472  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:17.603480  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:17.603588  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"355","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1026 01:13:18.100141  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:18.100165  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:18.100173  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:18.100179  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:18.102559  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:18.102585  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:18.102594  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:18.102602  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:18.102610  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:18.102620  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:18 GMT
	I1026 01:13:18.102633  104058 round_trippers.go:580]     Audit-Id: c5a2ac38-1d83-4773-9f49-69cb0d01849d
	I1026 01:13:18.102644  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:18.102751  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"355","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1026 01:13:18.600904  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:18.600929  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:18.600939  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:18.600947  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:18.603083  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:18.603110  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:18.603119  104058 round_trippers.go:580]     Audit-Id: 4084f26e-9139-45b0-9efe-73472b889755
	I1026 01:13:18.603125  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:18.603130  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:18.603135  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:18.603140  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:18.603145  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:18 GMT
	I1026 01:13:18.603341  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"355","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1026 01:13:18.603690  104058 node_ready.go:58] node "multinode-204768" has status "Ready":"False"
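The repeated GET requests above are minikube's node-readiness wait loop: it re-fetches the node object roughly every 500ms until `status` reports `"Ready": "True"`. As a hypothetical illustration of that pattern only (not minikube's actual implementation — the `check` callable below is a stand-in for the API call in `node_ready.go`), the poll-with-timeout logic can be sketched as:

```python
import time

def wait_for(check, timeout=10.0, interval=0.5):
    """Poll check() every `interval` seconds until it returns True,
    or until `timeout` seconds elapse. True on success, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False

# Stand-in for the node-status check: becomes "Ready" after three polls,
# mimicking a node that is still starting up on the first few requests.
state = {"polls": 0}
def node_ready():
    state["polls"] += 1
    return state["polls"] >= 3

print(wait_for(node_ready, timeout=5.0, interval=0.1))  # True
```

Each iteration in the log corresponds to one `check()` here: a GET, a logged `200 OK`, and — while the node is not yet ready — the `has status "Ready":"False"` line before the next sleep.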
	I1026 01:13:19.100992  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:19.101012  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:19.101021  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:19.101027  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:19.103438  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:19.103463  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:19.103470  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:19.103478  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:19.103487  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:19 GMT
	I1026 01:13:19.103495  104058 round_trippers.go:580]     Audit-Id: 2547cd48-c327-4d3b-9806-7455c01edd8c
	I1026 01:13:19.103504  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:19.103514  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:19.103625  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"355","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1026 01:13:19.600119  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:19.600142  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:19.600150  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:19.600157  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:19.602604  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:19.602622  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:19.602630  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:19.602635  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:19 GMT
	I1026 01:13:19.602643  104058 round_trippers.go:580]     Audit-Id: 085d6dfc-4eec-4254-8736-ca00e6fdf17c
	I1026 01:13:19.602651  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:19.602662  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:19.602673  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:19.602823  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"355","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1026 01:13:20.100049  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:20.100073  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:20.100081  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:20.100087  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:20.102399  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:20.102431  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:20.102443  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:20.102451  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:20.102459  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:20.102467  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:20.102484  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:20 GMT
	I1026 01:13:20.102494  104058 round_trippers.go:580]     Audit-Id: ce376a29-b2a6-4f1f-b3de-475e83598a1d
	I1026 01:13:20.102706  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"355","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1026 01:13:20.600690  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:20.600719  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:20.600731  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:20.600740  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:20.603039  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:20.603064  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:20.603072  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:20.603078  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:20.603086  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:20 GMT
	I1026 01:13:20.603094  104058 round_trippers.go:580]     Audit-Id: 8f52e699-0c8d-4524-9250-be32e30357e0
	I1026 01:13:20.603102  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:20.603110  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:20.603300  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"355","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1026 01:13:21.100849  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:21.100892  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:21.100903  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:21.100910  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:21.103288  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:21.103313  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:21.103323  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:21.103331  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:21 GMT
	I1026 01:13:21.103339  104058 round_trippers.go:580]     Audit-Id: e00758e3-ecc8-4201-ad89-7101ef5311a2
	I1026 01:13:21.103348  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:21.103360  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:21.103373  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:21.103529  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"355","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1026 01:13:21.103874  104058 node_ready.go:58] node "multinode-204768" has status "Ready":"False"
	I1026 01:13:21.600090  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:21.600111  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:21.600122  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:21.600130  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:21.602541  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:21.602567  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:21.602580  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:21.602588  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:21.602597  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:21 GMT
	I1026 01:13:21.602606  104058 round_trippers.go:580]     Audit-Id: 14575344-2da5-40a2-b5a7-42bccb961925
	I1026 01:13:21.602616  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:21.602624  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:21.602799  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"355","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1026 01:13:22.100303  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:22.100327  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:22.100335  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:22.100341  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:22.102716  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:22.102744  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:22.102755  104058 round_trippers.go:580]     Audit-Id: 8ae98f0d-4edb-41c3-9034-c5b6932bc610
	I1026 01:13:22.102764  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:22.102772  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:22.102777  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:22.102782  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:22.102787  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:22 GMT
	I1026 01:13:22.102924  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"355","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1026 01:13:22.600091  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:22.600113  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:22.600121  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:22.600126  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:22.602493  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:22.602513  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:22.602522  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:22.602530  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:22.602537  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:22.602544  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:22.602551  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:22 GMT
	I1026 01:13:22.602559  104058 round_trippers.go:580]     Audit-Id: 47d1a17e-06b9-4567-8d7e-8dd7fcc44b18
	I1026 01:13:22.602688  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"355","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1026 01:13:23.100255  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:23.100279  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:23.100287  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:23.100293  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:23.102741  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:23.102763  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:23.102772  104058 round_trippers.go:580]     Audit-Id: 46d52b29-3bb2-46a4-a0ea-caee3996f30f
	I1026 01:13:23.102780  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:23.102787  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:23.102795  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:23.102802  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:23.102811  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:23 GMT
	I1026 01:13:23.102960  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"355","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1026 01:13:23.600500  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:23.600524  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:23.600532  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:23.600538  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:23.602897  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:23.602921  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:23.602930  104058 round_trippers.go:580]     Audit-Id: 5b8025e8-96b0-498a-99f9-807e3166fba0
	I1026 01:13:23.602944  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:23.602956  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:23.602969  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:23.602978  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:23.602987  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:23 GMT
	I1026 01:13:23.603107  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"355","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1026 01:13:23.603441  104058 node_ready.go:58] node "multinode-204768" has status "Ready":"False"
	I1026 01:13:24.100721  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:24.100748  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:24.100757  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:24.100763  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:24.103185  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:24.103208  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:24.103216  104058 round_trippers.go:580]     Audit-Id: f6135151-0c04-483d-b749-9210dc43d584
	I1026 01:13:24.103222  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:24.103230  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:24.103238  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:24.103247  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:24.103257  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:24 GMT
	I1026 01:13:24.103372  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"355","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1026 01:13:24.601051  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:24.601072  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:24.601081  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:24.601087  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:24.603552  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:24.603576  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:24.603585  104058 round_trippers.go:580]     Audit-Id: ad21aeb7-0d72-43cc-901d-124284cdf9c2
	I1026 01:13:24.603594  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:24.603603  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:24.603613  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:24.603626  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:24.603635  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:24 GMT
	I1026 01:13:24.603800  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"355","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1026 01:13:25.100479  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:25.100504  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:25.100515  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:25.100524  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:25.102854  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:25.102877  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:25.102884  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:25 GMT
	I1026 01:13:25.102890  104058 round_trippers.go:580]     Audit-Id: 3ea1268d-38ed-40c4-8451-128e08babe93
	I1026 01:13:25.102895  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:25.102900  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:25.102905  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:25.102910  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:25.103038  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"355","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1026 01:13:25.600872  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:25.600913  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:25.600922  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:25.600927  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:25.603471  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:25.603495  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:25.603502  104058 round_trippers.go:580]     Audit-Id: 202493dd-a67d-49c7-906b-b3e6033a358e
	I1026 01:13:25.603507  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:25.603512  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:25.603518  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:25.603523  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:25.603529  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:25 GMT
	I1026 01:13:25.603697  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"355","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1026 01:13:25.604211  104058 node_ready.go:58] node "multinode-204768" has status "Ready":"False"
	I1026 01:13:26.100319  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:26.100344  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:26.100353  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:26.100359  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:26.102691  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:26.102710  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:26.102719  104058 round_trippers.go:580]     Audit-Id: da4a246e-167a-4ce8-ab4e-5a6840236c78
	I1026 01:13:26.102731  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:26.102742  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:26.102752  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:26.102765  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:26.102770  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:26 GMT
	I1026 01:13:26.102922  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"355","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1026 01:13:26.600375  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:26.600396  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:26.600404  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:26.600410  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:26.602898  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:26.602918  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:26.602924  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:26.602930  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:26.602935  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:26.602940  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:26.602948  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:26 GMT
	I1026 01:13:26.602957  104058 round_trippers.go:580]     Audit-Id: 99b398cb-8485-4753-ae27-f529c4e79662
	I1026 01:13:26.603120  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"355","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1026 01:13:27.100760  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:27.100782  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:27.100796  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:27.100802  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:27.103271  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:27.103296  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:27.103304  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:27.103312  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:27 GMT
	I1026 01:13:27.103321  104058 round_trippers.go:580]     Audit-Id: 18fd5482-7f05-4777-8b14-74234530f956
	I1026 01:13:27.103330  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:27.103339  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:27.103351  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:27.103470  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"355","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1026 01:13:27.599974  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:27.600013  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:27.600021  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:27.600028  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:27.602401  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:27.602421  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:27.602428  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:27.602434  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:27.602442  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:27 GMT
	I1026 01:13:27.602457  104058 round_trippers.go:580]     Audit-Id: 7360a8b7-92fb-4fb9-a43e-b11f1c00b24d
	I1026 01:13:27.602467  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:27.602484  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:27.602617  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"355","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1026 01:13:28.100194  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:28.100230  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:28.100241  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:28.100260  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:28.102621  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:28.102650  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:28.102661  104058 round_trippers.go:580]     Audit-Id: e72ac961-02d0-4513-a475-3e3eba9a5ce5
	I1026 01:13:28.102669  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:28.102676  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:28.102684  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:28.102691  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:28.102698  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:28 GMT
	I1026 01:13:28.102872  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"355","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1026 01:13:28.103267  104058 node_ready.go:58] node "multinode-204768" has status "Ready":"False"
	I1026 01:13:28.600482  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:28.600508  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:28.600520  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:28.600528  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:28.602743  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:28.602762  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:28.602769  104058 round_trippers.go:580]     Audit-Id: 9113474a-d50c-4081-9931-3303778cc80c
	I1026 01:13:28.602774  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:28.602781  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:28.602788  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:28.602796  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:28.602804  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:28 GMT
	I1026 01:13:28.602953  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"355","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1026 01:13:29.100106  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:29.100131  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:29.100139  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:29.100146  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:29.102804  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:29.102827  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:29.102835  104058 round_trippers.go:580]     Audit-Id: 3d690566-b678-4427-a244-5fee6313df38
	I1026 01:13:29.102840  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:29.102845  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:29.102851  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:29.102864  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:29.102872  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:29 GMT
	I1026 01:13:29.103001  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"355","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1026 01:13:29.600755  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:29.600778  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:29.600786  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:29.600795  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:29.603036  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:29.603057  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:29.603064  104058 round_trippers.go:580]     Audit-Id: 18cdac04-3698-4adf-af4f-c577da66be5c
	I1026 01:13:29.603069  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:29.603074  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:29.603081  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:29.603090  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:29.603100  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:29 GMT
	I1026 01:13:29.603250  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"355","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1026 01:13:30.100902  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:30.100927  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:30.100937  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:30.100944  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:30.103283  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:30.103308  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:30.103315  104058 round_trippers.go:580]     Audit-Id: f2306028-1dd0-42cb-8039-009b48d678a8
	I1026 01:13:30.103326  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:30.103331  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:30.103337  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:30.103342  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:30.103347  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:30 GMT
	I1026 01:13:30.103457  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"355","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1026 01:13:30.103768  104058 node_ready.go:58] node "multinode-204768" has status "Ready":"False"
	I1026 01:13:30.600657  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:30.600687  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:30.600695  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:30.600701  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:30.603173  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:30.603198  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:30.603206  104058 round_trippers.go:580]     Audit-Id: 8c720390-7530-4556-a5be-7f2e1ae65fa3
	I1026 01:13:30.603211  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:30.603216  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:30.603221  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:30.603226  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:30.603232  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:30 GMT
	I1026 01:13:30.603343  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"355","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1026 01:13:31.100943  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:31.100966  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:31.100975  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:31.100981  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:31.103646  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:31.103669  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:31.103680  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:31.103688  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:31.103696  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:31.103705  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:31 GMT
	I1026 01:13:31.103716  104058 round_trippers.go:580]     Audit-Id: 94ffc458-a161-4eee-be78-4be294547fb4
	I1026 01:13:31.103728  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:31.103897  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"355","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1026 01:13:31.600073  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:31.600099  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:31.600109  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:31.600118  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:31.602369  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:31.602401  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:31.602413  104058 round_trippers.go:580]     Audit-Id: 5215a879-2deb-436d-83e0-1699ab08385e
	I1026 01:13:31.602423  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:31.602434  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:31.602443  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:31.602453  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:31.602462  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:31 GMT
	I1026 01:13:31.602604  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"355","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1026 01:13:32.100098  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:32.100120  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:32.100127  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:32.100134  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:32.102485  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:32.102510  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:32.102520  104058 round_trippers.go:580]     Audit-Id: 3ee00724-245f-433f-bc14-62389f25a8e9
	I1026 01:13:32.102529  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:32.102538  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:32.102547  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:32.102555  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:32.102567  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:32 GMT
	I1026 01:13:32.102761  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"355","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1026 01:13:32.600946  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:32.600972  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:32.600990  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:32.600999  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:32.603595  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:32.603617  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:32.603624  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:32.603630  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:32.603635  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:32 GMT
	I1026 01:13:32.603641  104058 round_trippers.go:580]     Audit-Id: 6745524d-babb-4a3f-8502-c0b186446ee4
	I1026 01:13:32.603646  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:32.603653  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:32.603784  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"355","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1026 01:13:32.604136  104058 node_ready.go:58] node "multinode-204768" has status "Ready":"False"
	I1026 01:13:33.100344  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:33.100371  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:33.100379  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:33.100385  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:33.102711  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:33.102741  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:33.102751  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:33.102757  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:33.102762  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:33.102768  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:33.102776  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:33 GMT
	I1026 01:13:33.102789  104058 round_trippers.go:580]     Audit-Id: 65906803-60a4-439c-9209-6565dab1d4f9
	I1026 01:13:33.102949  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"355","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1026 01:13:33.600581  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:33.600602  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:33.600612  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:33.600619  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:33.602921  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:33.602943  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:33.602950  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:33.602955  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:33.602960  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:33.602966  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:33 GMT
	I1026 01:13:33.602970  104058 round_trippers.go:580]     Audit-Id: dce61d95-8123-47c7-82f0-5eb17582d4a7
	I1026 01:13:33.602975  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:33.603127  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"355","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1026 01:13:34.100834  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:34.100856  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:34.100881  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:34.100887  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:34.103101  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:34.103120  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:34.103127  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:34.103132  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:34.103137  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:34 GMT
	I1026 01:13:34.103143  104058 round_trippers.go:580]     Audit-Id: fd6dc924-a631-4472-b0e2-0c53736da9af
	I1026 01:13:34.103150  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:34.103158  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:34.103277  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"355","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1026 01:13:34.600925  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:34.600951  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:34.600963  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:34.600972  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:34.603799  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:34.603820  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:34.603829  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:34.603837  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:34.603844  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:34.603852  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:34 GMT
	I1026 01:13:34.603863  104058 round_trippers.go:580]     Audit-Id: 98603659-a7a5-4c23-9389-fa6771907b28
	I1026 01:13:34.603876  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:34.603987  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"355","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1026 01:13:34.604289  104058 node_ready.go:58] node "multinode-204768" has status "Ready":"False"
	I1026 01:13:35.100847  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:35.100870  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:35.100878  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:35.100884  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:35.103142  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:35.103163  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:35.103173  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:35.103180  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:35.103188  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:35.103195  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:35.103203  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:35 GMT
	I1026 01:13:35.103212  104058 round_trippers.go:580]     Audit-Id: ad647797-666d-42cf-9f7c-75a59d0d80eb
	I1026 01:13:35.103329  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"355","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1026 01:13:35.600097  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:35.600122  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:35.600133  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:35.600142  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:35.602758  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:35.602782  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:35.602792  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:35.602801  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:35.602808  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:35.602817  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:35 GMT
	I1026 01:13:35.602825  104058 round_trippers.go:580]     Audit-Id: 97ed73ff-a2be-481d-8f41-5f10646d58bc
	I1026 01:13:35.602834  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:35.602945  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"355","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1026 01:13:36.100627  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:36.100661  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:36.100673  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:36.100682  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:36.103577  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:36.103603  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:36.103614  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:36.103622  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:36.103633  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:36.103642  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:36 GMT
	I1026 01:13:36.103652  104058 round_trippers.go:580]     Audit-Id: 4be642ea-8501-45e5-a119-aee2b60552f1
	I1026 01:13:36.103660  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:36.103796  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"422","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1026 01:13:36.104209  104058 node_ready.go:49] node "multinode-204768" has status "Ready":"True"
	I1026 01:13:36.104229  104058 node_ready.go:38] duration metric: took 31.085703161s waiting for node "multinode-204768" to be "Ready" ...
	I1026 01:13:36.104241  104058 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1026 01:13:36.104313  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1026 01:13:36.104324  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:36.104336  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:36.104349  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:36.108065  104058 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:13:36.108122  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:36.108138  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:36.108147  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:36 GMT
	I1026 01:13:36.108158  104058 round_trippers.go:580]     Audit-Id: 860a971d-5f1d-48c1-a684-dee80ada0475
	I1026 01:13:36.108168  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:36.108179  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:36.108190  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:36.108603  104058 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"428"},"items":[{"metadata":{"name":"coredns-5dd5756b68-dccqq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"40c339fe-ec4b-429f-afa8-f305c33e4344","resourceVersion":"428","creationTimestamp":"2023-10-26T01:13:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"1a2c8df7-8c76-46f4-b773-924c84f49f5a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1a2c8df7-8c76-46f4-b773-924c84f49f5a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55533 chars]
	I1026 01:13:36.112203  104058 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-dccqq" in "kube-system" namespace to be "Ready" ...
	I1026 01:13:36.112270  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-dccqq
	I1026 01:13:36.112278  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:36.112285  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:36.112294  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:36.114589  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:36.114605  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:36.114611  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:36.114617  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:36.114622  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:36.114627  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:36.114632  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:36 GMT
	I1026 01:13:36.114639  104058 round_trippers.go:580]     Audit-Id: f4d51f1d-5ded-4f6b-8c2e-4929191356e5
	I1026 01:13:36.114787  104058 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-dccqq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"40c339fe-ec4b-429f-afa8-f305c33e4344","resourceVersion":"428","creationTimestamp":"2023-10-26T01:13:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"1a2c8df7-8c76-46f4-b773-924c84f49f5a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1a2c8df7-8c76-46f4-b773-924c84f49f5a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1026 01:13:36.115198  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:36.115210  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:36.115217  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:36.115223  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:36.117178  104058 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1026 01:13:36.117193  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:36.117203  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:36.117212  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:36.117221  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:36.117233  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:36.117241  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:36 GMT
	I1026 01:13:36.117253  104058 round_trippers.go:580]     Audit-Id: 37e3e1d8-ee9a-4b1c-b3eb-8e0ee1b2fc92
	I1026 01:13:36.117417  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"422","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1026 01:13:36.117791  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-dccqq
	I1026 01:13:36.117805  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:36.117812  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:36.117818  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:36.120101  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:36.120123  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:36.120133  104058 round_trippers.go:580]     Audit-Id: ee71c261-77de-4b83-b90e-93e06cce4506
	I1026 01:13:36.120142  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:36.120150  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:36.120159  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:36.120175  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:36.120183  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:36 GMT
	I1026 01:13:36.120304  104058 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-dccqq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"40c339fe-ec4b-429f-afa8-f305c33e4344","resourceVersion":"428","creationTimestamp":"2023-10-26T01:13:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"1a2c8df7-8c76-46f4-b773-924c84f49f5a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1a2c8df7-8c76-46f4-b773-924c84f49f5a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1026 01:13:36.120793  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:36.120809  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:36.120832  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:36.120844  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:36.123151  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:36.123170  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:36.123180  104058 round_trippers.go:580]     Audit-Id: a1dab733-21a5-40aa-992b-0debfd848f16
	I1026 01:13:36.123188  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:36.123197  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:36.123213  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:36.123227  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:36.123235  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:36 GMT
	I1026 01:13:36.123399  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"422","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1026 01:13:36.624034  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-dccqq
	I1026 01:13:36.624056  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:36.624069  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:36.624079  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:36.626498  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:36.626525  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:36.626535  104058 round_trippers.go:580]     Audit-Id: 7cd13887-f240-409a-99f5-993cae3ce07d
	I1026 01:13:36.626544  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:36.626552  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:36.626559  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:36.626567  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:36.626580  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:36 GMT
	I1026 01:13:36.626714  104058 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-dccqq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"40c339fe-ec4b-429f-afa8-f305c33e4344","resourceVersion":"439","creationTimestamp":"2023-10-26T01:13:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"1a2c8df7-8c76-46f4-b773-924c84f49f5a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1a2c8df7-8c76-46f4-b773-924c84f49f5a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I1026 01:13:36.627186  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:36.627198  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:36.627205  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:36.627211  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:36.629234  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:36.629258  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:36.629268  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:36.629278  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:36.629286  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:36.629295  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:36 GMT
	I1026 01:13:36.629312  104058 round_trippers.go:580]     Audit-Id: 924f5def-7eaf-45fc-9e19-3f875d19ad95
	I1026 01:13:36.629319  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:36.629462  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"422","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1026 01:13:37.123964  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-dccqq
	I1026 01:13:37.123985  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:37.123993  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:37.123999  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:37.126478  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:37.126497  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:37.126504  104058 round_trippers.go:580]     Audit-Id: eb9b534b-13fb-4f9c-9e87-09d6b6860231
	I1026 01:13:37.126509  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:37.126514  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:37.126519  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:37.126524  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:37.126529  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:37 GMT
	I1026 01:13:37.126687  104058 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-dccqq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"40c339fe-ec4b-429f-afa8-f305c33e4344","resourceVersion":"439","creationTimestamp":"2023-10-26T01:13:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"1a2c8df7-8c76-46f4-b773-924c84f49f5a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1a2c8df7-8c76-46f4-b773-924c84f49f5a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I1026 01:13:37.127141  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:37.127153  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:37.127161  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:37.127167  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:37.128965  104058 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1026 01:13:37.128980  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:37.128990  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:37.128998  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:37.129008  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:37.129020  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:37.129027  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:37 GMT
	I1026 01:13:37.129033  104058 round_trippers.go:580]     Audit-Id: ad680b32-f244-4d57-a3fd-adac0f49f102
	I1026 01:13:37.129139  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"422","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1026 01:13:37.624789  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-dccqq
	I1026 01:13:37.624813  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:37.624821  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:37.624827  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:37.627448  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:37.627466  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:37.627473  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:37.627478  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:37 GMT
	I1026 01:13:37.627483  104058 round_trippers.go:580]     Audit-Id: 72a0d108-d05d-4a3f-8f85-a9b7c83554b9
	I1026 01:13:37.627488  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:37.627493  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:37.627498  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:37.627643  104058 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-dccqq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"40c339fe-ec4b-429f-afa8-f305c33e4344","resourceVersion":"442","creationTimestamp":"2023-10-26T01:13:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"1a2c8df7-8c76-46f4-b773-924c84f49f5a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1a2c8df7-8c76-46f4-b773-924c84f49f5a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1026 01:13:37.628092  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:37.628104  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:37.628111  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:37.628117  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:37.629918  104058 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1026 01:13:37.629933  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:37.629940  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:37.629946  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:37.629955  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:37.629964  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:37 GMT
	I1026 01:13:37.629975  104058 round_trippers.go:580]     Audit-Id: 547d5d3e-5e9b-42af-94cc-845d40e170c2
	I1026 01:13:37.629987  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:37.630084  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"422","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1026 01:13:37.630377  104058 pod_ready.go:92] pod "coredns-5dd5756b68-dccqq" in "kube-system" namespace has status "Ready":"True"
	I1026 01:13:37.630399  104058 pod_ready.go:81] duration metric: took 1.518174554s waiting for pod "coredns-5dd5756b68-dccqq" in "kube-system" namespace to be "Ready" ...
	I1026 01:13:37.630407  104058 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-204768" in "kube-system" namespace to be "Ready" ...
	I1026 01:13:37.630449  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-204768
	I1026 01:13:37.630459  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:37.630466  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:37.630472  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:37.632152  104058 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1026 01:13:37.632166  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:37.632174  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:37.632180  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:37.632185  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:37.632191  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:37 GMT
	I1026 01:13:37.632199  104058 round_trippers.go:580]     Audit-Id: e886e8fe-7985-4800-a572-99850e5ffda1
	I1026 01:13:37.632209  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:37.632385  104058 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-204768","namespace":"kube-system","uid":"c9c95bc6-cbbf-4412-a34e-68fa705cebd3","resourceVersion":"313","creationTimestamp":"2023-10-26T01:12:50Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"e8d07c850007bf81e9202b3f7ccc144c","kubernetes.io/config.mirror":"e8d07c850007bf81e9202b3f7ccc144c","kubernetes.io/config.seen":"2023-10-26T01:12:46.401057131Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:12:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1026 01:13:37.632763  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:37.632776  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:37.632783  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:37.632791  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:37.634427  104058 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1026 01:13:37.634448  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:37.634458  104058 round_trippers.go:580]     Audit-Id: efacd932-cccb-4b2a-98ef-b42ebe59fd32
	I1026 01:13:37.634467  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:37.634475  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:37.634484  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:37.634494  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:37.634510  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:37 GMT
	I1026 01:13:37.634616  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"422","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1026 01:13:37.634934  104058 pod_ready.go:92] pod "etcd-multinode-204768" in "kube-system" namespace has status "Ready":"True"
	I1026 01:13:37.634948  104058 pod_ready.go:81] duration metric: took 4.536352ms waiting for pod "etcd-multinode-204768" in "kube-system" namespace to be "Ready" ...
	I1026 01:13:37.634959  104058 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-204768" in "kube-system" namespace to be "Ready" ...
	I1026 01:13:37.635014  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-204768
	I1026 01:13:37.635022  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:37.635029  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:37.635035  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:37.636593  104058 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1026 01:13:37.636608  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:37.636617  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:37 GMT
	I1026 01:13:37.636626  104058 round_trippers.go:580]     Audit-Id: 289c6254-651e-472a-9cdb-a2adc693e752
	I1026 01:13:37.636639  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:37.636648  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:37.636660  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:37.636671  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:37.636793  104058 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-204768","namespace":"kube-system","uid":"996138a2-c8e3-473f-8adc-cea5c13e9400","resourceVersion":"315","creationTimestamp":"2023-10-26T01:12:52Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"4fdd3118b5471cf161cd04b0bf3d7dfa","kubernetes.io/config.mirror":"4fdd3118b5471cf161cd04b0bf3d7dfa","kubernetes.io/config.seen":"2023-10-26T01:12:52.092072793Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:12:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I1026 01:13:37.637158  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:37.637168  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:37.637175  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:37.637181  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:37.638750  104058 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1026 01:13:37.638768  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:37.638777  104058 round_trippers.go:580]     Audit-Id: 72b2cdf6-87b2-4e59-a37a-afeed4080819
	I1026 01:13:37.638784  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:37.638798  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:37.638809  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:37.638819  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:37.638830  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:37 GMT
	I1026 01:13:37.638932  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"422","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1026 01:13:37.639182  104058 pod_ready.go:92] pod "kube-apiserver-multinode-204768" in "kube-system" namespace has status "Ready":"True"
	I1026 01:13:37.639194  104058 pod_ready.go:81] duration metric: took 4.223787ms waiting for pod "kube-apiserver-multinode-204768" in "kube-system" namespace to be "Ready" ...
	I1026 01:13:37.639202  104058 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-204768" in "kube-system" namespace to be "Ready" ...
	I1026 01:13:37.639241  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-204768
	I1026 01:13:37.639249  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:37.639255  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:37.639261  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:37.640738  104058 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1026 01:13:37.640753  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:37.640759  104058 round_trippers.go:580]     Audit-Id: aa7a0684-c613-4df6-8843-8f7bffe184dd
	I1026 01:13:37.640765  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:37.640774  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:37.640783  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:37.640791  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:37.640802  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:37 GMT
	I1026 01:13:37.640929  104058 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-204768","namespace":"kube-system","uid":"29d45769-f580-4533-b706-49744a365a37","resourceVersion":"319","creationTimestamp":"2023-10-26T01:12:52Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c5ab0e7c91688fbde32e6aea37a6a4f1","kubernetes.io/config.mirror":"c5ab0e7c91688fbde32e6aea37a6a4f1","kubernetes.io/config.seen":"2023-10-26T01:12:52.092074804Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:12:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I1026 01:13:37.701616  104058 request.go:629] Waited for 60.218519ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:37.701719  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:37.701731  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:37.701741  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:37.701751  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:37.704076  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:37.704101  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:37.704108  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:37 GMT
	I1026 01:13:37.704113  104058 round_trippers.go:580]     Audit-Id: 4d32906a-ed73-466c-a7a2-31191ea5105d
	I1026 01:13:37.704118  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:37.704123  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:37.704129  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:37.704134  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:37.704288  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"422","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1026 01:13:37.704596  104058 pod_ready.go:92] pod "kube-controller-manager-multinode-204768" in "kube-system" namespace has status "Ready":"True"
	I1026 01:13:37.704611  104058 pod_ready.go:81] duration metric: took 65.402601ms waiting for pod "kube-controller-manager-multinode-204768" in "kube-system" namespace to be "Ready" ...
	I1026 01:13:37.704622  104058 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hkfhh" in "kube-system" namespace to be "Ready" ...
	I1026 01:13:37.901080  104058 request.go:629] Waited for 196.373565ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hkfhh
	I1026 01:13:37.901152  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hkfhh
	I1026 01:13:37.901164  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:37.901176  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:37.901187  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:37.903488  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:37.903513  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:37.903523  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:37.903531  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:37 GMT
	I1026 01:13:37.903540  104058 round_trippers.go:580]     Audit-Id: 476d5781-5ab4-4c82-ac31-6fc34f7a75d0
	I1026 01:13:37.903549  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:37.903558  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:37.903567  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:37.903788  104058 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hkfhh","generateName":"kube-proxy-","namespace":"kube-system","uid":"1fb5ef2f-82a6-48b5-bb1b-9f7461ed90ed","resourceVersion":"376","creationTimestamp":"2023-10-26T01:13:04Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f5eb1b01-7f36-41da-8e2b-7cffcba996d0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f5eb1b01-7f36-41da-8e2b-7cffcba996d0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5509 chars]
	I1026 01:13:38.101585  104058 request.go:629] Waited for 197.340567ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:38.101639  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:38.101643  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:38.101651  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:38.101657  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:38.103882  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:38.103903  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:38.103912  104058 round_trippers.go:580]     Audit-Id: f3841a62-b631-470e-8231-f4a044f43bcf
	I1026 01:13:38.103920  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:38.103927  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:38.103935  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:38.103947  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:38.103954  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:38 GMT
	I1026 01:13:38.104114  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"422","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1026 01:13:38.104498  104058 pod_ready.go:92] pod "kube-proxy-hkfhh" in "kube-system" namespace has status "Ready":"True"
	I1026 01:13:38.104517  104058 pod_ready.go:81] duration metric: took 399.888131ms waiting for pod "kube-proxy-hkfhh" in "kube-system" namespace to be "Ready" ...
	I1026 01:13:38.104530  104058 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-204768" in "kube-system" namespace to be "Ready" ...
	I1026 01:13:38.301003  104058 request.go:629] Waited for 196.397389ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-204768
	I1026 01:13:38.301062  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-204768
	I1026 01:13:38.301067  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:38.301076  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:38.301082  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:38.303199  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:38.303219  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:38.303241  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:38.303249  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:38.303257  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:38.303269  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:38.303282  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:38 GMT
	I1026 01:13:38.303291  104058 round_trippers.go:580]     Audit-Id: 6873a580-a84d-4ec9-aa71-235799eebd99
	I1026 01:13:38.303450  104058 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-204768","namespace":"kube-system","uid":"9760c99d-332a-47cd-87ba-bb616722ecef","resourceVersion":"410","creationTimestamp":"2023-10-26T01:12:52Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"979980edfd50477450614c13b844007d","kubernetes.io/config.mirror":"979980edfd50477450614c13b844007d","kubernetes.io/config.seen":"2023-10-26T01:12:52.092064230Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:12:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1026 01:13:38.501219  104058 request.go:629] Waited for 197.345246ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:38.501277  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:13:38.501282  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:38.501289  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:38.501296  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:38.503902  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:38.503929  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:38.503942  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:38.503950  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:38.503958  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:38.503969  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:38 GMT
	I1026 01:13:38.503981  104058 round_trippers.go:580]     Audit-Id: e134fa12-07b8-4b93-b1ff-c9427e3c53d3
	I1026 01:13:38.503993  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:38.504132  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"422","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1026 01:13:38.504569  104058 pod_ready.go:92] pod "kube-scheduler-multinode-204768" in "kube-system" namespace has status "Ready":"True"
	I1026 01:13:38.504586  104058 pod_ready.go:81] duration metric: took 400.047928ms waiting for pod "kube-scheduler-multinode-204768" in "kube-system" namespace to be "Ready" ...
	I1026 01:13:38.504610  104058 pod_ready.go:38] duration metric: took 2.400354817s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1026 01:13:38.504633  104058 api_server.go:52] waiting for apiserver process to appear ...
	I1026 01:13:38.504695  104058 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 01:13:38.514245  104058 command_runner.go:130] > 1454
	I1026 01:13:38.514970  104058 api_server.go:72] duration metric: took 33.7074113s to wait for apiserver process to appear ...
	I1026 01:13:38.514992  104058 api_server.go:88] waiting for apiserver healthz status ...
	I1026 01:13:38.515012  104058 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1026 01:13:38.519863  104058 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1026 01:13:38.519934  104058 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I1026 01:13:38.519943  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:38.519950  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:38.519959  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:38.520898  104058 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1026 01:13:38.520913  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:38.520922  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:38.520934  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:38.520944  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:38.520955  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:38.520965  104058 round_trippers.go:580]     Content-Length: 264
	I1026 01:13:38.520977  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:38 GMT
	I1026 01:13:38.520985  104058 round_trippers.go:580]     Audit-Id: d12a7116-9ace-4028-85ed-bc5d68ee0d15
	I1026 01:13:38.521004  104058 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.3",
	  "gitCommit": "a8a1abc25cad87333840cd7d54be2efaf31a3177",
	  "gitTreeState": "clean",
	  "buildDate": "2023-10-18T11:33:18Z",
	  "goVersion": "go1.20.10",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1026 01:13:38.521105  104058 api_server.go:141] control plane version: v1.28.3
	I1026 01:13:38.521125  104058 api_server.go:131] duration metric: took 6.125867ms to wait for apiserver health ...
	I1026 01:13:38.521133  104058 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 01:13:38.701590  104058 request.go:629] Waited for 180.368004ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1026 01:13:38.701638  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1026 01:13:38.701643  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:38.701651  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:38.701658  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:38.705102  104058 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:13:38.705123  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:38.705130  104058 round_trippers.go:580]     Audit-Id: 6b826370-2087-45f4-a4ca-c1f6d4063fce
	I1026 01:13:38.705136  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:38.705141  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:38.705146  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:38.705152  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:38.705158  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:38 GMT
	I1026 01:13:38.705605  104058 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"447"},"items":[{"metadata":{"name":"coredns-5dd5756b68-dccqq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"40c339fe-ec4b-429f-afa8-f305c33e4344","resourceVersion":"442","creationTimestamp":"2023-10-26T01:13:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"1a2c8df7-8c76-46f4-b773-924c84f49f5a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1a2c8df7-8c76-46f4-b773-924c84f49f5a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55611 chars]
	I1026 01:13:38.707312  104058 system_pods.go:59] 8 kube-system pods found
	I1026 01:13:38.707333  104058 system_pods.go:61] "coredns-5dd5756b68-dccqq" [40c339fe-ec4b-429f-afa8-f305c33e4344] Running
	I1026 01:13:38.707338  104058 system_pods.go:61] "etcd-multinode-204768" [c9c95bc6-cbbf-4412-a34e-68fa705cebd3] Running
	I1026 01:13:38.707342  104058 system_pods.go:61] "kindnet-9jtfh" [41219a25-2f31-49f2-a776-52d56ecfb4cf] Running
	I1026 01:13:38.707346  104058 system_pods.go:61] "kube-apiserver-multinode-204768" [996138a2-c8e3-473f-8adc-cea5c13e9400] Running
	I1026 01:13:38.707350  104058 system_pods.go:61] "kube-controller-manager-multinode-204768" [29d45769-f580-4533-b706-49744a365a37] Running
	I1026 01:13:38.707354  104058 system_pods.go:61] "kube-proxy-hkfhh" [1fb5ef2f-82a6-48b5-bb1b-9f7461ed90ed] Running
	I1026 01:13:38.707358  104058 system_pods.go:61] "kube-scheduler-multinode-204768" [9760c99d-332a-47cd-87ba-bb616722ecef] Running
	I1026 01:13:38.707364  104058 system_pods.go:61] "storage-provisioner" [7d126e64-5bdb-4415-a095-5d9411bdfb3d] Running
	I1026 01:13:38.707380  104058 system_pods.go:74] duration metric: took 186.242397ms to wait for pod list to return data ...
	I1026 01:13:38.707387  104058 default_sa.go:34] waiting for default service account to be created ...
	I1026 01:13:38.900735  104058 request.go:629] Waited for 193.283521ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1026 01:13:38.900803  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1026 01:13:38.900808  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:38.900815  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:38.900822  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:38.903324  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:38.903346  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:38.903354  104058 round_trippers.go:580]     Audit-Id: a06f2218-bf19-457d-a11b-411ac2b1bea6
	I1026 01:13:38.903360  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:38.903365  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:38.903370  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:38.903375  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:38.903381  104058 round_trippers.go:580]     Content-Length: 261
	I1026 01:13:38.903386  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:38 GMT
	I1026 01:13:38.903407  104058 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"447"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"fcd3b642-2db5-4513-b656-009103f0fa3a","resourceVersion":"351","creationTimestamp":"2023-10-26T01:13:04Z"}}]}
	I1026 01:13:38.903609  104058 default_sa.go:45] found service account: "default"
	I1026 01:13:38.903624  104058 default_sa.go:55] duration metric: took 196.230015ms for default service account to be created ...
	I1026 01:13:38.903634  104058 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 01:13:39.101122  104058 request.go:629] Waited for 197.425162ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1026 01:13:39.101192  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1026 01:13:39.101197  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:39.101205  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:39.101211  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:39.104621  104058 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:13:39.104646  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:39.104655  104058 round_trippers.go:580]     Audit-Id: 410aa61c-262f-4eda-9e16-57f574582b86
	I1026 01:13:39.104660  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:39.104669  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:39.104678  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:39.104687  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:39.104696  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:39 GMT
	I1026 01:13:39.105275  104058 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"447"},"items":[{"metadata":{"name":"coredns-5dd5756b68-dccqq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"40c339fe-ec4b-429f-afa8-f305c33e4344","resourceVersion":"442","creationTimestamp":"2023-10-26T01:13:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"1a2c8df7-8c76-46f4-b773-924c84f49f5a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1a2c8df7-8c76-46f4-b773-924c84f49f5a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55611 chars]
	I1026 01:13:39.107011  104058 system_pods.go:86] 8 kube-system pods found
	I1026 01:13:39.107032  104058 system_pods.go:89] "coredns-5dd5756b68-dccqq" [40c339fe-ec4b-429f-afa8-f305c33e4344] Running
	I1026 01:13:39.107038  104058 system_pods.go:89] "etcd-multinode-204768" [c9c95bc6-cbbf-4412-a34e-68fa705cebd3] Running
	I1026 01:13:39.107042  104058 system_pods.go:89] "kindnet-9jtfh" [41219a25-2f31-49f2-a776-52d56ecfb4cf] Running
	I1026 01:13:39.107046  104058 system_pods.go:89] "kube-apiserver-multinode-204768" [996138a2-c8e3-473f-8adc-cea5c13e9400] Running
	I1026 01:13:39.107059  104058 system_pods.go:89] "kube-controller-manager-multinode-204768" [29d45769-f580-4533-b706-49744a365a37] Running
	I1026 01:13:39.107066  104058 system_pods.go:89] "kube-proxy-hkfhh" [1fb5ef2f-82a6-48b5-bb1b-9f7461ed90ed] Running
	I1026 01:13:39.107072  104058 system_pods.go:89] "kube-scheduler-multinode-204768" [9760c99d-332a-47cd-87ba-bb616722ecef] Running
	I1026 01:13:39.107076  104058 system_pods.go:89] "storage-provisioner" [7d126e64-5bdb-4415-a095-5d9411bdfb3d] Running
	I1026 01:13:39.107087  104058 system_pods.go:126] duration metric: took 203.445491ms to wait for k8s-apps to be running ...
	I1026 01:13:39.107096  104058 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 01:13:39.107140  104058 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 01:13:39.119360  104058 system_svc.go:56] duration metric: took 12.25228ms WaitForService to wait for kubelet.
	I1026 01:13:39.119391  104058 kubeadm.go:581] duration metric: took 34.311836189s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1026 01:13:39.119416  104058 node_conditions.go:102] verifying NodePressure condition ...
	I1026 01:13:39.300763  104058 request.go:629] Waited for 181.268344ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1026 01:13:39.300827  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1026 01:13:39.300832  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:39.300840  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:39.300846  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:39.303294  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:39.303314  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:39.303325  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:39.303334  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:39 GMT
	I1026 01:13:39.303346  104058 round_trippers.go:580]     Audit-Id: 9535bd4e-78db-4062-837e-385e232db1e7
	I1026 01:13:39.303355  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:39.303376  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:39.303383  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:39.303491  104058 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"447"},"items":[{"metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"422","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6000 chars]
	I1026 01:13:39.303881  104058 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1026 01:13:39.303899  104058 node_conditions.go:123] node cpu capacity is 8
	I1026 01:13:39.303909  104058 node_conditions.go:105] duration metric: took 184.488351ms to run NodePressure ...
	I1026 01:13:39.303919  104058 start.go:228] waiting for startup goroutines ...
	I1026 01:13:39.303928  104058 start.go:233] waiting for cluster config update ...
	I1026 01:13:39.303937  104058 start.go:242] writing updated cluster config ...
	I1026 01:13:39.306756  104058 out.go:177] 
	I1026 01:13:39.309356  104058 config.go:182] Loaded profile config "multinode-204768": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1026 01:13:39.309424  104058 profile.go:148] Saving config to /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/multinode-204768/config.json ...
	I1026 01:13:39.311267  104058 out.go:177] * Starting worker node multinode-204768-m02 in cluster multinode-204768
	I1026 01:13:39.312632  104058 cache.go:121] Beginning downloading kic base image for docker with crio
	I1026 01:13:39.314265  104058 out.go:177] * Pulling base image ...
	I1026 01:13:39.316186  104058 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1026 01:13:39.316218  104058 cache.go:56] Caching tarball of preloaded images
	I1026 01:13:39.316286  104058 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon
	I1026 01:13:39.316323  104058 preload.go:174] Found /home/jenkins/minikube-integration/17491-8444/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 01:13:39.316340  104058 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1026 01:13:39.316432  104058 profile.go:148] Saving config to /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/multinode-204768/config.json ...
	I1026 01:13:39.332515  104058 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon, skipping pull
	I1026 01:13:39.332537  104058 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 exists in daemon, skipping load
	I1026 01:13:39.332556  104058 cache.go:194] Successfully downloaded all kic artifacts
	I1026 01:13:39.332590  104058 start.go:365] acquiring machines lock for multinode-204768-m02: {Name:mk29e439dd6dcffe8857803db1bc2a0f98a8dc92 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 01:13:39.332702  104058 start.go:369] acquired machines lock for "multinode-204768-m02" in 89.556µs
	I1026 01:13:39.332725  104058 start.go:93] Provisioning new machine with config: &{Name:multinode-204768 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-204768 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1026 01:13:39.332819  104058 start.go:125] createHost starting for "m02" (driver="docker")
	I1026 01:13:39.335093  104058 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1026 01:13:39.335205  104058 start.go:159] libmachine.API.Create for "multinode-204768" (driver="docker")
	I1026 01:13:39.335223  104058 client.go:168] LocalClient.Create starting
	I1026 01:13:39.335286  104058 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17491-8444/.minikube/certs/ca.pem
	I1026 01:13:39.335317  104058 main.go:141] libmachine: Decoding PEM data...
	I1026 01:13:39.335329  104058 main.go:141] libmachine: Parsing certificate...
	I1026 01:13:39.335375  104058 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17491-8444/.minikube/certs/cert.pem
	I1026 01:13:39.335392  104058 main.go:141] libmachine: Decoding PEM data...
	I1026 01:13:39.335403  104058 main.go:141] libmachine: Parsing certificate...
	I1026 01:13:39.335565  104058 cli_runner.go:164] Run: docker network inspect multinode-204768 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 01:13:39.350680  104058 network_create.go:77] Found existing network {name:multinode-204768 subnet:0xc002ef5b90 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I1026 01:13:39.350729  104058 kic.go:121] calculated static IP "192.168.58.3" for the "multinode-204768-m02" container
	I1026 01:13:39.350782  104058 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1026 01:13:39.366622  104058 cli_runner.go:164] Run: docker volume create multinode-204768-m02 --label name.minikube.sigs.k8s.io=multinode-204768-m02 --label created_by.minikube.sigs.k8s.io=true
	I1026 01:13:39.382624  104058 oci.go:103] Successfully created a docker volume multinode-204768-m02
	I1026 01:13:39.382698  104058 cli_runner.go:164] Run: docker run --rm --name multinode-204768-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-204768-m02 --entrypoint /usr/bin/test -v multinode-204768-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -d /var/lib
	I1026 01:13:39.893409  104058 oci.go:107] Successfully prepared a docker volume multinode-204768-m02
	I1026 01:13:39.893442  104058 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1026 01:13:39.893462  104058 kic.go:194] Starting extracting preloaded images to volume ...
	I1026 01:13:39.893528  104058 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17491-8444/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-204768-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -I lz4 -xf /preloaded.tar -C /extractDir
	I1026 01:13:44.996156  104058 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17491-8444/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-204768-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -I lz4 -xf /preloaded.tar -C /extractDir: (5.102566531s)
	I1026 01:13:44.996189  104058 kic.go:203] duration metric: took 5.102725 seconds to extract preloaded images to volume
	W1026 01:13:44.996327  104058 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1026 01:13:44.996453  104058 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1026 01:13:45.047963  104058 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-204768-m02 --name multinode-204768-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-204768-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-204768-m02 --network multinode-204768 --ip 192.168.58.3 --volume multinode-204768-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883
	I1026 01:13:45.345725  104058 cli_runner.go:164] Run: docker container inspect multinode-204768-m02 --format={{.State.Running}}
	I1026 01:13:45.363286  104058 cli_runner.go:164] Run: docker container inspect multinode-204768-m02 --format={{.State.Status}}
	I1026 01:13:45.382099  104058 cli_runner.go:164] Run: docker exec multinode-204768-m02 stat /var/lib/dpkg/alternatives/iptables
	I1026 01:13:45.434236  104058 oci.go:144] the created container "multinode-204768-m02" has a running status.
	I1026 01:13:45.434268  104058 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17491-8444/.minikube/machines/multinode-204768-m02/id_rsa...
	I1026 01:13:45.709182  104058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17491-8444/.minikube/machines/multinode-204768-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1026 01:13:45.709240  104058 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17491-8444/.minikube/machines/multinode-204768-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1026 01:13:45.739362  104058 cli_runner.go:164] Run: docker container inspect multinode-204768-m02 --format={{.State.Status}}
	I1026 01:13:45.755694  104058 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1026 01:13:45.755721  104058 kic_runner.go:114] Args: [docker exec --privileged multinode-204768-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1026 01:13:45.819982  104058 cli_runner.go:164] Run: docker container inspect multinode-204768-m02 --format={{.State.Status}}
	I1026 01:13:45.841292  104058 machine.go:88] provisioning docker machine ...
	I1026 01:13:45.841335  104058 ubuntu.go:169] provisioning hostname "multinode-204768-m02"
	I1026 01:13:45.841419  104058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-204768-m02
	I1026 01:13:45.860320  104058 main.go:141] libmachine: Using SSH client type: native
	I1026 01:13:45.860787  104058 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 127.0.0.1 32854 <nil> <nil>}
	I1026 01:13:45.860805  104058 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-204768-m02 && echo "multinode-204768-m02" | sudo tee /etc/hostname
	I1026 01:13:46.009043  104058 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-204768-m02
	
	I1026 01:13:46.009119  104058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-204768-m02
	I1026 01:13:46.025934  104058 main.go:141] libmachine: Using SSH client type: native
	I1026 01:13:46.026285  104058 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 127.0.0.1 32854 <nil> <nil>}
	I1026 01:13:46.026312  104058 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-204768-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-204768-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-204768-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 01:13:46.149825  104058 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 01:13:46.149853  104058 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17491-8444/.minikube CaCertPath:/home/jenkins/minikube-integration/17491-8444/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17491-8444/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17491-8444/.minikube}
	I1026 01:13:46.149870  104058 ubuntu.go:177] setting up certificates
	I1026 01:13:46.149880  104058 provision.go:83] configureAuth start
	I1026 01:13:46.149929  104058 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-204768-m02
	I1026 01:13:46.166255  104058 provision.go:138] copyHostCerts
	I1026 01:13:46.166290  104058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17491-8444/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17491-8444/.minikube/ca.pem
	I1026 01:13:46.166316  104058 exec_runner.go:144] found /home/jenkins/minikube-integration/17491-8444/.minikube/ca.pem, removing ...
	I1026 01:13:46.166322  104058 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17491-8444/.minikube/ca.pem
	I1026 01:13:46.166385  104058 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17491-8444/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17491-8444/.minikube/ca.pem (1078 bytes)
	I1026 01:13:46.166457  104058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17491-8444/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17491-8444/.minikube/cert.pem
	I1026 01:13:46.166474  104058 exec_runner.go:144] found /home/jenkins/minikube-integration/17491-8444/.minikube/cert.pem, removing ...
	I1026 01:13:46.166478  104058 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17491-8444/.minikube/cert.pem
	I1026 01:13:46.166501  104058 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17491-8444/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17491-8444/.minikube/cert.pem (1123 bytes)
	I1026 01:13:46.166540  104058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17491-8444/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17491-8444/.minikube/key.pem
	I1026 01:13:46.166556  104058 exec_runner.go:144] found /home/jenkins/minikube-integration/17491-8444/.minikube/key.pem, removing ...
	I1026 01:13:46.166562  104058 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17491-8444/.minikube/key.pem
	I1026 01:13:46.166581  104058 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17491-8444/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17491-8444/.minikube/key.pem (1675 bytes)
	I1026 01:13:46.166625  104058 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17491-8444/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17491-8444/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17491-8444/.minikube/certs/ca-key.pem org=jenkins.multinode-204768-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-204768-m02]
	I1026 01:13:46.443203  104058 provision.go:172] copyRemoteCerts
	I1026 01:13:46.443275  104058 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 01:13:46.443306  104058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-204768-m02
	I1026 01:13:46.460695  104058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32854 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/multinode-204768-m02/id_rsa Username:docker}
	I1026 01:13:46.554009  104058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17491-8444/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1026 01:13:46.554073  104058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1026 01:13:46.576367  104058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17491-8444/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1026 01:13:46.576437  104058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1026 01:13:46.597777  104058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17491-8444/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1026 01:13:46.597845  104058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 01:13:46.619183  104058 provision.go:86] duration metric: configureAuth took 469.286608ms
	I1026 01:13:46.619215  104058 ubuntu.go:193] setting minikube options for container-runtime
	I1026 01:13:46.619398  104058 config.go:182] Loaded profile config "multinode-204768": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1026 01:13:46.619488  104058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-204768-m02
	I1026 01:13:46.635814  104058 main.go:141] libmachine: Using SSH client type: native
	I1026 01:13:46.636150  104058 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 127.0.0.1 32854 <nil> <nil>}
	I1026 01:13:46.636168  104058 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 01:13:46.839570  104058 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 01:13:46.839594  104058 machine.go:91] provisioned docker machine in 998.277026ms
	I1026 01:13:46.839605  104058 client.go:171] LocalClient.Create took 7.504374475s
	I1026 01:13:46.839627  104058 start.go:167] duration metric: libmachine.API.Create for "multinode-204768" took 7.504420834s
	I1026 01:13:46.839636  104058 start.go:300] post-start starting for "multinode-204768-m02" (driver="docker")
	I1026 01:13:46.839648  104058 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 01:13:46.839715  104058 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 01:13:46.839762  104058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-204768-m02
	I1026 01:13:46.856542  104058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32854 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/multinode-204768-m02/id_rsa Username:docker}
	I1026 01:13:46.946541  104058 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 01:13:46.949602  104058 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1026 01:13:46.949624  104058 command_runner.go:130] > NAME="Ubuntu"
	I1026 01:13:46.949629  104058 command_runner.go:130] > VERSION_ID="22.04"
	I1026 01:13:46.949635  104058 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1026 01:13:46.949647  104058 command_runner.go:130] > VERSION_CODENAME=jammy
	I1026 01:13:46.949652  104058 command_runner.go:130] > ID=ubuntu
	I1026 01:13:46.949656  104058 command_runner.go:130] > ID_LIKE=debian
	I1026 01:13:46.949661  104058 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1026 01:13:46.949689  104058 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1026 01:13:46.949705  104058 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1026 01:13:46.949720  104058 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1026 01:13:46.949735  104058 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1026 01:13:46.949789  104058 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 01:13:46.949828  104058 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1026 01:13:46.949839  104058 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1026 01:13:46.949848  104058 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1026 01:13:46.949863  104058 filesync.go:126] Scanning /home/jenkins/minikube-integration/17491-8444/.minikube/addons for local assets ...
	I1026 01:13:46.949925  104058 filesync.go:126] Scanning /home/jenkins/minikube-integration/17491-8444/.minikube/files for local assets ...
	I1026 01:13:46.950014  104058 filesync.go:149] local asset: /home/jenkins/minikube-integration/17491-8444/.minikube/files/etc/ssl/certs/152462.pem -> 152462.pem in /etc/ssl/certs
	I1026 01:13:46.950054  104058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17491-8444/.minikube/files/etc/ssl/certs/152462.pem -> /etc/ssl/certs/152462.pem
	I1026 01:13:46.950168  104058 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 01:13:46.957818  104058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/files/etc/ssl/certs/152462.pem --> /etc/ssl/certs/152462.pem (1708 bytes)
	I1026 01:13:46.979454  104058 start.go:303] post-start completed in 139.803213ms
	I1026 01:13:46.979775  104058 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-204768-m02
	I1026 01:13:46.995921  104058 profile.go:148] Saving config to /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/multinode-204768/config.json ...
	I1026 01:13:46.996166  104058 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 01:13:46.996206  104058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-204768-m02
	I1026 01:13:47.012912  104058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32854 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/multinode-204768-m02/id_rsa Username:docker}
	I1026 01:13:47.098153  104058 command_runner.go:130] > 20%!(MISSING)
	I1026 01:13:47.098359  104058 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 01:13:47.102317  104058 command_runner.go:130] > 234G
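The two disk checks above shell out to `df` and slice the table with `awk` (`$5` is the Use% column from `df -h`, `$4` the available space from `df -BG`). A minimal sketch of that parsing, run against canned `df` output rather than the live node — the sample figures are invented for illustration:

```shell
# Simulated `df -h /var` output; the real log runs df over SSH on the node.
df_h_output='Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       294G   59G  234G  20% /var'

# NR==2 skips the header row; $5 is the Use% column, $4 the Avail column.
usage=$(printf '%s\n' "$df_h_output" | awk 'NR==2{print $5}')
avail=$(printf '%s\n' "$df_h_output" | awk 'NR==2{print $4}')

echo "usage=$usage avail=$avail"
```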
	I1026 01:13:47.102555  104058 start.go:128] duration metric: createHost completed in 7.769725702s
	I1026 01:13:47.102579  104058 start.go:83] releasing machines lock for "multinode-204768-m02", held for 7.76986373s
	I1026 01:13:47.102651  104058 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-204768-m02
	I1026 01:13:47.121833  104058 out.go:177] * Found network options:
	I1026 01:13:47.123784  104058 out.go:177]   - NO_PROXY=192.168.58.2
	W1026 01:13:47.125210  104058 proxy.go:119] fail to check proxy env: Error ip not in block
	W1026 01:13:47.125259  104058 proxy.go:119] fail to check proxy env: Error ip not in block
	I1026 01:13:47.125340  104058 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 01:13:47.125388  104058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-204768-m02
	I1026 01:13:47.125470  104058 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 01:13:47.125547  104058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-204768-m02
	I1026 01:13:47.144947  104058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32854 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/multinode-204768-m02/id_rsa Username:docker}
	I1026 01:13:47.146022  104058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32854 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/multinode-204768-m02/id_rsa Username:docker}
	I1026 01:13:47.316078  104058 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1026 01:13:47.366920  104058 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1026 01:13:47.370895  104058 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1026 01:13:47.370931  104058 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1026 01:13:47.370942  104058 command_runner.go:130] > Device: b0h/176d	Inode: 800898      Links: 1
	I1026 01:13:47.370951  104058 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1026 01:13:47.370960  104058 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I1026 01:13:47.370973  104058 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1026 01:13:47.370982  104058 command_runner.go:130] > Change: 2023-10-26 00:53:54.215199380 +0000
	I1026 01:13:47.370990  104058 command_runner.go:130] >  Birth: 2023-10-26 00:53:54.215199380 +0000
	I1026 01:13:47.371097  104058 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 01:13:47.388646  104058 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1026 01:13:47.388748  104058 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 01:13:47.415773  104058 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1026 01:13:47.415809  104058 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
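The `find ... -exec sh -c "sudo mv {} {}.mk_disabled"` step above disables conflicting bridge/podman CNI configs by renaming them so CRI-O ignores them. A sketch of the same pattern against a scratch directory instead of `/etc/cni/net.d` (file names borrowed from the log output; the extra kindnet file is invented to show a non-match):

```shell
# Scratch stand-in for /etc/cni/net.d.
netd=$(mktemp -d)
touch "$netd/87-podman-bridge.conflist" "$netd/100-crio-bridge.conf" "$netd/10-kindnet.conflist"

# Rename every bridge/podman config not already disabled; quoting the -name
# patterns keeps the shell from expanding them before find sees them.
find "$netd" -maxdepth 1 -type f \( \( -name '*bridge*' -or -name '*podman*' \) \
    -and -not -name '*.mk_disabled' \) -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;

ls "$netd"
```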
	I1026 01:13:47.415817  104058 start.go:472] detecting cgroup driver to use...
	I1026 01:13:47.415870  104058 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1026 01:13:47.415923  104058 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 01:13:47.429508  104058 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 01:13:47.439465  104058 docker.go:198] disabling cri-docker service (if available) ...
	I1026 01:13:47.439513  104058 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 01:13:47.452098  104058 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 01:13:47.465920  104058 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 01:13:47.544186  104058 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 01:13:47.557253  104058 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1026 01:13:47.620129  104058 docker.go:214] disabling docker service ...
	I1026 01:13:47.620199  104058 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 01:13:47.637224  104058 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 01:13:47.647568  104058 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 01:13:47.731729  104058 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1026 01:13:47.731800  104058 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 01:13:47.819998  104058 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1026 01:13:47.820063  104058 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 01:13:47.831251  104058 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 01:13:47.844919  104058 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1026 01:13:47.845607  104058 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1026 01:13:47.845656  104058 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:13:47.854285  104058 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 01:13:47.854344  104058 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:13:47.863206  104058 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:13:47.871903  104058 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
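The four `sed` runs above rewrite `/etc/crio/crio.conf.d/02-crio.conf` in place: pin the pause image, switch the cgroup driver to `cgroupfs`, then delete and re-add `conmon_cgroup` right after the `cgroup_manager` line. The same edits, applied to a minimal stand-in file whose starting contents are assumed, not taken from the node:

```shell
# Stand-in for /etc/crio/crio.conf.d/02-crio.conf (initial values assumed).
conf=$(mktemp)
cat > "$conf" <<'EOF'
[crio.image]
pause_image = "registry.k8s.io/pause:3.6"
[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
EOF

# Same edits the log runs over SSH.
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$conf"
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
sed -i '/conmon_cgroup = .*/d' "$conf"
sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"

cat "$conf"
```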
	I1026 01:13:47.880791  104058 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 01:13:47.889159  104058 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 01:13:47.896048  104058 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1026 01:13:47.896667  104058 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 01:13:47.904160  104058 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 01:13:47.979149  104058 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 01:13:48.074787  104058 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 01:13:48.074851  104058 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 01:13:48.078260  104058 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1026 01:13:48.078286  104058 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1026 01:13:48.078297  104058 command_runner.go:130] > Device: b9h/185d	Inode: 190         Links: 1
	I1026 01:13:48.078309  104058 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1026 01:13:48.078318  104058 command_runner.go:130] > Access: 2023-10-26 01:13:48.057756454 +0000
	I1026 01:13:48.078330  104058 command_runner.go:130] > Modify: 2023-10-26 01:13:48.057756454 +0000
	I1026 01:13:48.078342  104058 command_runner.go:130] > Change: 2023-10-26 01:13:48.057756454 +0000
	I1026 01:13:48.078349  104058 command_runner.go:130] >  Birth: -
	I1026 01:13:48.078383  104058 start.go:540] Will wait 60s for crictl version
	I1026 01:13:48.078419  104058 ssh_runner.go:195] Run: which crictl
	I1026 01:13:48.081527  104058 command_runner.go:130] > /usr/bin/crictl
	I1026 01:13:48.081602  104058 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1026 01:13:48.112651  104058 command_runner.go:130] > Version:  0.1.0
	I1026 01:13:48.112676  104058 command_runner.go:130] > RuntimeName:  cri-o
	I1026 01:13:48.112684  104058 command_runner.go:130] > RuntimeVersion:  1.24.6
	I1026 01:13:48.112693  104058 command_runner.go:130] > RuntimeApiVersion:  v1
	I1026 01:13:48.114760  104058 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1026 01:13:48.114841  104058 ssh_runner.go:195] Run: crio --version
	I1026 01:13:48.145725  104058 command_runner.go:130] > crio version 1.24.6
	I1026 01:13:48.145750  104058 command_runner.go:130] > Version:          1.24.6
	I1026 01:13:48.145758  104058 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1026 01:13:48.145763  104058 command_runner.go:130] > GitTreeState:     clean
	I1026 01:13:48.145769  104058 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1026 01:13:48.145774  104058 command_runner.go:130] > GoVersion:        go1.18.2
	I1026 01:13:48.145778  104058 command_runner.go:130] > Compiler:         gc
	I1026 01:13:48.145782  104058 command_runner.go:130] > Platform:         linux/amd64
	I1026 01:13:48.145788  104058 command_runner.go:130] > Linkmode:         dynamic
	I1026 01:13:48.145800  104058 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1026 01:13:48.145806  104058 command_runner.go:130] > SeccompEnabled:   true
	I1026 01:13:48.145812  104058 command_runner.go:130] > AppArmorEnabled:  false
	I1026 01:13:48.147063  104058 ssh_runner.go:195] Run: crio --version
	I1026 01:13:48.178053  104058 command_runner.go:130] > crio version 1.24.6
	I1026 01:13:48.178074  104058 command_runner.go:130] > Version:          1.24.6
	I1026 01:13:48.178084  104058 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1026 01:13:48.178094  104058 command_runner.go:130] > GitTreeState:     clean
	I1026 01:13:48.178103  104058 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1026 01:13:48.178110  104058 command_runner.go:130] > GoVersion:        go1.18.2
	I1026 01:13:48.178116  104058 command_runner.go:130] > Compiler:         gc
	I1026 01:13:48.178125  104058 command_runner.go:130] > Platform:         linux/amd64
	I1026 01:13:48.178139  104058 command_runner.go:130] > Linkmode:         dynamic
	I1026 01:13:48.178156  104058 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1026 01:13:48.178167  104058 command_runner.go:130] > SeccompEnabled:   true
	I1026 01:13:48.178176  104058 command_runner.go:130] > AppArmorEnabled:  false
	I1026 01:13:48.181467  104058 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.6 ...
	I1026 01:13:48.183070  104058 out.go:177]   - env NO_PROXY=192.168.58.2
	I1026 01:13:48.184652  104058 cli_runner.go:164] Run: docker network inspect multinode-204768 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 01:13:48.201396  104058 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1026 01:13:48.204722  104058 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
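The hosts-file update above is an idempotent replace: strip any line ending in the managed hostname, append the fresh mapping, and copy the result back over `/etc/hosts`. A sketch of the pattern against a temp file rather than the node's real hosts file:

```shell
# Temp stand-in for /etc/hosts, pre-seeded with a stale managed entry.
hosts=$(mktemp)
tab=$(printf '\t')
printf '127.0.0.1\tlocalhost\n192.168.58.9\thost.minikube.internal\n' > "$hosts"

# Drop any existing tab-prefixed entry for the hostname, append the current
# mapping, then replace the original file (the log does the copy via sudo cp).
{ grep -v "${tab}host.minikube.internal\$" "$hosts"; \
  printf '192.168.58.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"

cat "$hosts"
```

Re-running the block leaves exactly one `host.minikube.internal` line, which is the point of the grep-then-append shape.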
	I1026 01:13:48.214455  104058 certs.go:56] Setting up /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/multinode-204768 for IP: 192.168.58.3
	I1026 01:13:48.214490  104058 certs.go:190] acquiring lock for shared ca certs: {Name:mk5c45c423cc5a6761a0ccf5b25a0c8b531fe271 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:13:48.214647  104058 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17491-8444/.minikube/ca.key
	I1026 01:13:48.214700  104058 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17491-8444/.minikube/proxy-client-ca.key
	I1026 01:13:48.214717  104058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17491-8444/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1026 01:13:48.214736  104058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17491-8444/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1026 01:13:48.214754  104058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17491-8444/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1026 01:13:48.214774  104058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17491-8444/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1026 01:13:48.214838  104058 certs.go:437] found cert: /home/jenkins/minikube-integration/17491-8444/.minikube/certs/home/jenkins/minikube-integration/17491-8444/.minikube/certs/15246.pem (1338 bytes)
	W1026 01:13:48.214876  104058 certs.go:433] ignoring /home/jenkins/minikube-integration/17491-8444/.minikube/certs/home/jenkins/minikube-integration/17491-8444/.minikube/certs/15246_empty.pem, impossibly tiny 0 bytes
	I1026 01:13:48.214894  104058 certs.go:437] found cert: /home/jenkins/minikube-integration/17491-8444/.minikube/certs/home/jenkins/minikube-integration/17491-8444/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 01:13:48.214929  104058 certs.go:437] found cert: /home/jenkins/minikube-integration/17491-8444/.minikube/certs/home/jenkins/minikube-integration/17491-8444/.minikube/certs/ca.pem (1078 bytes)
	I1026 01:13:48.214963  104058 certs.go:437] found cert: /home/jenkins/minikube-integration/17491-8444/.minikube/certs/home/jenkins/minikube-integration/17491-8444/.minikube/certs/cert.pem (1123 bytes)
	I1026 01:13:48.214996  104058 certs.go:437] found cert: /home/jenkins/minikube-integration/17491-8444/.minikube/certs/home/jenkins/minikube-integration/17491-8444/.minikube/certs/key.pem (1675 bytes)
	I1026 01:13:48.215052  104058 certs.go:437] found cert: /home/jenkins/minikube-integration/17491-8444/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17491-8444/.minikube/files/etc/ssl/certs/152462.pem (1708 bytes)
	I1026 01:13:48.215091  104058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17491-8444/.minikube/files/etc/ssl/certs/152462.pem -> /usr/share/ca-certificates/152462.pem
	I1026 01:13:48.215112  104058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17491-8444/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:13:48.215130  104058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17491-8444/.minikube/certs/15246.pem -> /usr/share/ca-certificates/15246.pem
	I1026 01:13:48.215492  104058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 01:13:48.235907  104058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 01:13:48.255970  104058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 01:13:48.276281  104058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1026 01:13:48.296840  104058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/files/etc/ssl/certs/152462.pem --> /usr/share/ca-certificates/152462.pem (1708 bytes)
	I1026 01:13:48.318324  104058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 01:13:48.341480  104058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/certs/15246.pem --> /usr/share/ca-certificates/15246.pem (1338 bytes)
	I1026 01:13:48.362720  104058 ssh_runner.go:195] Run: openssl version
	I1026 01:13:48.367466  104058 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1026 01:13:48.367667  104058 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 01:13:48.376156  104058 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:13:48.379405  104058 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 26 00:54 /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:13:48.379453  104058 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 26 00:54 /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:13:48.379496  104058 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:13:48.385703  104058 command_runner.go:130] > b5213941
	I1026 01:13:48.385874  104058 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 01:13:48.394766  104058 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15246.pem && ln -fs /usr/share/ca-certificates/15246.pem /etc/ssl/certs/15246.pem"
	I1026 01:13:48.403706  104058 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15246.pem
	I1026 01:13:48.406883  104058 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 26 01:00 /usr/share/ca-certificates/15246.pem
	I1026 01:13:48.406942  104058 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 26 01:00 /usr/share/ca-certificates/15246.pem
	I1026 01:13:48.406986  104058 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15246.pem
	I1026 01:13:48.413032  104058 command_runner.go:130] > 51391683
	I1026 01:13:48.413225  104058 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15246.pem /etc/ssl/certs/51391683.0"
	I1026 01:13:48.421539  104058 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/152462.pem && ln -fs /usr/share/ca-certificates/152462.pem /etc/ssl/certs/152462.pem"
	I1026 01:13:48.430021  104058 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/152462.pem
	I1026 01:13:48.433124  104058 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 26 01:00 /usr/share/ca-certificates/152462.pem
	I1026 01:13:48.433154  104058 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 26 01:00 /usr/share/ca-certificates/152462.pem
	I1026 01:13:48.433196  104058 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/152462.pem
	I1026 01:13:48.439129  104058 command_runner.go:130] > 3ec20f2e
	I1026 01:13:48.439358  104058 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/152462.pem /etc/ssl/certs/3ec20f2e.0"
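The `test -L ... || ln -fs ...` commands above install OpenSSL-style hash links: OpenSSL looks CA certs up by `<subject-hash>.0`, with the hash coming from `openssl x509 -hash -noout -in <cert>` (e.g. `b5213941` in the log). A sketch of the idempotent link step against a temp directory — the cert file here is an empty placeholder and the hash is reused from the log output, not recomputed:

```shell
# Scratch stand-in for /etc/ssl/certs with a placeholder CA file.
certdir=$(mktemp -d)
touch "$certdir/minikubeCA.pem"
hash=b5213941   # in the log: output of `openssl x509 -hash -noout -in <cert>`

# Link only if not already a symlink; -f replaces any stale regular file.
test -L "$certdir/$hash.0" || ln -fs "$certdir/minikubeCA.pem" "$certdir/$hash.0"
# Second run is a no-op, which keeps the provisioning step re-entrant.
test -L "$certdir/$hash.0" || ln -fs "$certdir/minikubeCA.pem" "$certdir/$hash.0"

readlink "$certdir/$hash.0"
```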
	I1026 01:13:48.447879  104058 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1026 01:13:48.450919  104058 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1026 01:13:48.450981  104058 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1026 01:13:48.451066  104058 ssh_runner.go:195] Run: crio config
	I1026 01:13:48.488608  104058 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1026 01:13:48.488632  104058 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1026 01:13:48.488643  104058 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1026 01:13:48.488655  104058 command_runner.go:130] > #
	I1026 01:13:48.488666  104058 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1026 01:13:48.488677  104058 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1026 01:13:48.488687  104058 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1026 01:13:48.488697  104058 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1026 01:13:48.488709  104058 command_runner.go:130] > # reload'.
	I1026 01:13:48.488719  104058 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1026 01:13:48.488727  104058 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1026 01:13:48.488738  104058 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1026 01:13:48.488748  104058 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1026 01:13:48.488758  104058 command_runner.go:130] > [crio]
	I1026 01:13:48.488770  104058 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1026 01:13:48.488783  104058 command_runner.go:130] > # containers images, in this directory.
	I1026 01:13:48.488796  104058 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1026 01:13:48.488812  104058 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1026 01:13:48.488825  104058 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I1026 01:13:48.488837  104058 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1026 01:13:48.488851  104058 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1026 01:13:48.488862  104058 command_runner.go:130] > # storage_driver = "vfs"
	I1026 01:13:48.488872  104058 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1026 01:13:48.488882  104058 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1026 01:13:48.488888  104058 command_runner.go:130] > # storage_option = [
	I1026 01:13:48.488892  104058 command_runner.go:130] > # ]
	I1026 01:13:48.488898  104058 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1026 01:13:48.488904  104058 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1026 01:13:48.488909  104058 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1026 01:13:48.488914  104058 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1026 01:13:48.488920  104058 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1026 01:13:48.488924  104058 command_runner.go:130] > # always happen on a node reboot
	I1026 01:13:48.488931  104058 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1026 01:13:48.488941  104058 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1026 01:13:48.488951  104058 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1026 01:13:48.488964  104058 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1026 01:13:48.488973  104058 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1026 01:13:48.488985  104058 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1026 01:13:48.488998  104058 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1026 01:13:48.489004  104058 command_runner.go:130] > # internal_wipe = true
	I1026 01:13:48.489010  104058 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1026 01:13:48.489018  104058 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1026 01:13:48.489027  104058 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1026 01:13:48.489037  104058 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1026 01:13:48.489048  104058 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1026 01:13:48.489054  104058 command_runner.go:130] > [crio.api]
	I1026 01:13:48.489063  104058 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1026 01:13:48.489071  104058 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1026 01:13:48.489080  104058 command_runner.go:130] > # IP address on which the stream server will listen.
	I1026 01:13:48.489087  104058 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1026 01:13:48.489098  104058 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1026 01:13:48.489107  104058 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1026 01:13:48.489114  104058 command_runner.go:130] > # stream_port = "0"
	I1026 01:13:48.489124  104058 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1026 01:13:48.489131  104058 command_runner.go:130] > # stream_enable_tls = false
	I1026 01:13:48.489141  104058 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1026 01:13:48.489149  104058 command_runner.go:130] > # stream_idle_timeout = ""
	I1026 01:13:48.489160  104058 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1026 01:13:48.489176  104058 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1026 01:13:48.489182  104058 command_runner.go:130] > # minutes.
	I1026 01:13:48.489188  104058 command_runner.go:130] > # stream_tls_cert = ""
	I1026 01:13:48.489197  104058 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1026 01:13:48.489207  104058 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1026 01:13:48.489214  104058 command_runner.go:130] > # stream_tls_key = ""
	I1026 01:13:48.489224  104058 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1026 01:13:48.489235  104058 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1026 01:13:48.489244  104058 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1026 01:13:48.489251  104058 command_runner.go:130] > # stream_tls_ca = ""
	I1026 01:13:48.489263  104058 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1026 01:13:48.489271  104058 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1026 01:13:48.489280  104058 command_runner.go:130] > # Maximum grpc receive message size in bytes. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1026 01:13:48.489366  104058 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1026 01:13:48.489403  104058 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1026 01:13:48.489421  104058 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1026 01:13:48.489432  104058 command_runner.go:130] > [crio.runtime]
	I1026 01:13:48.489445  104058 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1026 01:13:48.489455  104058 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1026 01:13:48.489464  104058 command_runner.go:130] > # "nofile=1024:2048"
	I1026 01:13:48.489482  104058 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1026 01:13:48.489492  104058 command_runner.go:130] > # default_ulimits = [
	I1026 01:13:48.489499  104058 command_runner.go:130] > # ]
	I1026 01:13:48.489513  104058 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1026 01:13:48.489524  104058 command_runner.go:130] > # no_pivot = false
	I1026 01:13:48.489538  104058 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1026 01:13:48.489550  104058 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1026 01:13:48.489562  104058 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1026 01:13:48.489573  104058 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1026 01:13:48.489585  104058 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1026 01:13:48.489601  104058 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1026 01:13:48.489612  104058 command_runner.go:130] > # conmon = ""
	I1026 01:13:48.489620  104058 command_runner.go:130] > # Cgroup setting for conmon
	I1026 01:13:48.489637  104058 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1026 01:13:48.489648  104058 command_runner.go:130] > conmon_cgroup = "pod"
	I1026 01:13:48.489659  104058 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1026 01:13:48.489684  104058 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1026 01:13:48.489694  104058 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1026 01:13:48.489699  104058 command_runner.go:130] > # conmon_env = [
	I1026 01:13:48.489704  104058 command_runner.go:130] > # ]
	I1026 01:13:48.489710  104058 command_runner.go:130] > # Additional environment variables to set for all the
	I1026 01:13:48.489717  104058 command_runner.go:130] > # containers. These are overridden if set in the
	I1026 01:13:48.489725  104058 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1026 01:13:48.489730  104058 command_runner.go:130] > # default_env = [
	I1026 01:13:48.489735  104058 command_runner.go:130] > # ]
	I1026 01:13:48.489742  104058 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1026 01:13:48.489747  104058 command_runner.go:130] > # selinux = false
	I1026 01:13:48.489755  104058 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1026 01:13:48.489762  104058 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1026 01:13:48.489770  104058 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1026 01:13:48.489775  104058 command_runner.go:130] > # seccomp_profile = ""
	I1026 01:13:48.489782  104058 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1026 01:13:48.489791  104058 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1026 01:13:48.489801  104058 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1026 01:13:48.489809  104058 command_runner.go:130] > # which might increase security.
	I1026 01:13:48.489818  104058 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I1026 01:13:48.489831  104058 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1026 01:13:48.489840  104058 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1026 01:13:48.489848  104058 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1026 01:13:48.489860  104058 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1026 01:13:48.489869  104058 command_runner.go:130] > # This option supports live configuration reload.
	I1026 01:13:48.489875  104058 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1026 01:13:48.489885  104058 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1026 01:13:48.489894  104058 command_runner.go:130] > # the cgroup blockio controller.
	I1026 01:13:48.489902  104058 command_runner.go:130] > # blockio_config_file = ""
	I1026 01:13:48.489911  104058 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1026 01:13:48.489919  104058 command_runner.go:130] > # irqbalance daemon.
	I1026 01:13:48.489926  104058 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1026 01:13:48.489940  104058 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1026 01:13:48.489952  104058 command_runner.go:130] > # This option supports live configuration reload.
	I1026 01:13:48.489960  104058 command_runner.go:130] > # rdt_config_file = ""
	I1026 01:13:48.489969  104058 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1026 01:13:48.489974  104058 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1026 01:13:48.489982  104058 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1026 01:13:48.489992  104058 command_runner.go:130] > # separate_pull_cgroup = ""
	I1026 01:13:48.489998  104058 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1026 01:13:48.490007  104058 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1026 01:13:48.490011  104058 command_runner.go:130] > # will be added.
	I1026 01:13:48.490017  104058 command_runner.go:130] > # default_capabilities = [
	I1026 01:13:48.490021  104058 command_runner.go:130] > # 	"CHOWN",
	I1026 01:13:48.490028  104058 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1026 01:13:48.490031  104058 command_runner.go:130] > # 	"FSETID",
	I1026 01:13:48.490035  104058 command_runner.go:130] > # 	"FOWNER",
	I1026 01:13:48.490039  104058 command_runner.go:130] > # 	"SETGID",
	I1026 01:13:48.490045  104058 command_runner.go:130] > # 	"SETUID",
	I1026 01:13:48.490059  104058 command_runner.go:130] > # 	"SETPCAP",
	I1026 01:13:48.490068  104058 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1026 01:13:48.490072  104058 command_runner.go:130] > # 	"KILL",
	I1026 01:13:48.490076  104058 command_runner.go:130] > # ]
	I1026 01:13:48.490084  104058 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1026 01:13:48.490093  104058 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1026 01:13:48.490098  104058 command_runner.go:130] > # add_inheritable_capabilities = true
	I1026 01:13:48.490106  104058 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1026 01:13:48.490113  104058 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1026 01:13:48.490119  104058 command_runner.go:130] > # default_sysctls = [
	I1026 01:13:48.490123  104058 command_runner.go:130] > # ]
	I1026 01:13:48.490128  104058 command_runner.go:130] > # List of devices on the host that a
	I1026 01:13:48.490137  104058 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1026 01:13:48.490140  104058 command_runner.go:130] > # allowed_devices = [
	I1026 01:13:48.490147  104058 command_runner.go:130] > # 	"/dev/fuse",
	I1026 01:13:48.490151  104058 command_runner.go:130] > # ]
	I1026 01:13:48.490156  104058 command_runner.go:130] > # List of additional devices, specified as
	I1026 01:13:48.490204  104058 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1026 01:13:48.490217  104058 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1026 01:13:48.490226  104058 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1026 01:13:48.490233  104058 command_runner.go:130] > # additional_devices = [
	I1026 01:13:48.490238  104058 command_runner.go:130] > # ]
	I1026 01:13:48.490248  104058 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1026 01:13:48.490262  104058 command_runner.go:130] > # cdi_spec_dirs = [
	I1026 01:13:48.490269  104058 command_runner.go:130] > # 	"/etc/cdi",
	I1026 01:13:48.490277  104058 command_runner.go:130] > # 	"/var/run/cdi",
	I1026 01:13:48.490286  104058 command_runner.go:130] > # ]
	I1026 01:13:48.490300  104058 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1026 01:13:48.490314  104058 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1026 01:13:48.490324  104058 command_runner.go:130] > # Defaults to false.
	I1026 01:13:48.490337  104058 command_runner.go:130] > # device_ownership_from_security_context = false
	I1026 01:13:48.490352  104058 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1026 01:13:48.490366  104058 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1026 01:13:48.490376  104058 command_runner.go:130] > # hooks_dir = [
	I1026 01:13:48.490389  104058 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1026 01:13:48.490397  104058 command_runner.go:130] > # ]
	I1026 01:13:48.490409  104058 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1026 01:13:48.490423  104058 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1026 01:13:48.490436  104058 command_runner.go:130] > # its default mounts from the following two files:
	I1026 01:13:48.490446  104058 command_runner.go:130] > #
	I1026 01:13:48.490463  104058 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1026 01:13:48.490477  104058 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1026 01:13:48.490490  104058 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1026 01:13:48.490496  104058 command_runner.go:130] > #
	I1026 01:13:48.490511  104058 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1026 01:13:48.490525  104058 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1026 01:13:48.490540  104058 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1026 01:13:48.490552  104058 command_runner.go:130] > #      only add mounts it finds in this file.
	I1026 01:13:48.490562  104058 command_runner.go:130] > #
	I1026 01:13:48.490572  104058 command_runner.go:130] > # default_mounts_file = ""
	I1026 01:13:48.490585  104058 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1026 01:13:48.490597  104058 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1026 01:13:48.490608  104058 command_runner.go:130] > # pids_limit = 0
	I1026 01:13:48.490623  104058 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1026 01:13:48.490637  104058 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1026 01:13:48.490651  104058 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1026 01:13:48.490669  104058 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1026 01:13:48.490679  104058 command_runner.go:130] > # log_size_max = -1
	I1026 01:13:48.490695  104058 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1026 01:13:48.490706  104058 command_runner.go:130] > # log_to_journald = false
	I1026 01:13:48.490717  104058 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1026 01:13:48.490729  104058 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1026 01:13:48.490745  104058 command_runner.go:130] > # Path to directory for container attach sockets.
	I1026 01:13:48.490757  104058 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1026 01:13:48.490770  104058 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1026 01:13:48.490781  104058 command_runner.go:130] > # bind_mount_prefix = ""
	I1026 01:13:48.490795  104058 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1026 01:13:48.490805  104058 command_runner.go:130] > # read_only = false
	I1026 01:13:48.490819  104058 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1026 01:13:48.490834  104058 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1026 01:13:48.490845  104058 command_runner.go:130] > # live configuration reload.
	I1026 01:13:48.490856  104058 command_runner.go:130] > # log_level = "info"
	I1026 01:13:48.490869  104058 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1026 01:13:48.490882  104058 command_runner.go:130] > # This option supports live configuration reload.
	I1026 01:13:48.490893  104058 command_runner.go:130] > # log_filter = ""
	I1026 01:13:48.490904  104058 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1026 01:13:48.490920  104058 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1026 01:13:48.490931  104058 command_runner.go:130] > # separated by comma.
	I1026 01:13:48.490942  104058 command_runner.go:130] > # uid_mappings = ""
	I1026 01:13:48.490956  104058 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1026 01:13:48.490972  104058 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1026 01:13:48.490982  104058 command_runner.go:130] > # separated by comma.
	I1026 01:13:48.490999  104058 command_runner.go:130] > # gid_mappings = ""
	I1026 01:13:48.491013  104058 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1026 01:13:48.491027  104058 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1026 01:13:48.491041  104058 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1026 01:13:48.491052  104058 command_runner.go:130] > # minimum_mappable_uid = -1
	I1026 01:13:48.491066  104058 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1026 01:13:48.491081  104058 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1026 01:13:48.491095  104058 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1026 01:13:48.491107  104058 command_runner.go:130] > # minimum_mappable_gid = -1
	I1026 01:13:48.491118  104058 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1026 01:13:48.491132  104058 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1026 01:13:48.491145  104058 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1026 01:13:48.491157  104058 command_runner.go:130] > # ctr_stop_timeout = 30
	I1026 01:13:48.491170  104058 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1026 01:13:48.491208  104058 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1026 01:13:48.491220  104058 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1026 01:13:48.491228  104058 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1026 01:13:48.491237  104058 command_runner.go:130] > # drop_infra_ctr = true
	I1026 01:13:48.491251  104058 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1026 01:13:48.491264  104058 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1026 01:13:48.491280  104058 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1026 01:13:48.491291  104058 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1026 01:13:48.491306  104058 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1026 01:13:48.491318  104058 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1026 01:13:48.491330  104058 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1026 01:13:48.491346  104058 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1026 01:13:48.491355  104058 command_runner.go:130] > # pinns_path = ""
	I1026 01:13:48.491367  104058 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1026 01:13:48.491382  104058 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1026 01:13:48.491396  104058 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1026 01:13:48.491410  104058 command_runner.go:130] > # default_runtime = "runc"
	I1026 01:13:48.491423  104058 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1026 01:13:48.491439  104058 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1026 01:13:48.491457  104058 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1026 01:13:48.491469  104058 command_runner.go:130] > # creation as a file is not desired either.
	I1026 01:13:48.491487  104058 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1026 01:13:48.491500  104058 command_runner.go:130] > # the hostname is being managed dynamically.
	I1026 01:13:48.491512  104058 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1026 01:13:48.491521  104058 command_runner.go:130] > # ]
	I1026 01:13:48.491535  104058 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1026 01:13:48.491549  104058 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1026 01:13:48.491564  104058 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1026 01:13:48.491578  104058 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1026 01:13:48.491587  104058 command_runner.go:130] > #
	I1026 01:13:48.491597  104058 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1026 01:13:48.491609  104058 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1026 01:13:48.491620  104058 command_runner.go:130] > #  runtime_type = "oci"
	I1026 01:13:48.491632  104058 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1026 01:13:48.491644  104058 command_runner.go:130] > #  privileged_without_host_devices = false
	I1026 01:13:48.491656  104058 command_runner.go:130] > #  allowed_annotations = []
	I1026 01:13:48.491665  104058 command_runner.go:130] > # Where:
	I1026 01:13:48.491674  104058 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1026 01:13:48.491689  104058 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1026 01:13:48.491704  104058 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1026 01:13:48.491718  104058 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1026 01:13:48.491727  104058 command_runner.go:130] > #   in $PATH.
	I1026 01:13:48.491739  104058 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1026 01:13:48.491751  104058 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1026 01:13:48.491766  104058 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1026 01:13:48.491776  104058 command_runner.go:130] > #   state.
	I1026 01:13:48.491790  104058 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1026 01:13:48.491804  104058 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1026 01:13:48.491819  104058 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1026 01:13:48.491832  104058 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1026 01:13:48.491846  104058 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1026 01:13:48.491862  104058 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1026 01:13:48.491876  104058 command_runner.go:130] > #   The currently recognized values are:
	I1026 01:13:48.491890  104058 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1026 01:13:48.491907  104058 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1026 01:13:48.491920  104058 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1026 01:13:48.491932  104058 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1026 01:13:48.491948  104058 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1026 01:13:48.491962  104058 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1026 01:13:48.491976  104058 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1026 01:13:48.491996  104058 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1026 01:13:48.492008  104058 command_runner.go:130] > #   should be moved to the container's cgroup
	I1026 01:13:48.492016  104058 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1026 01:13:48.492029  104058 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I1026 01:13:48.492040  104058 command_runner.go:130] > runtime_type = "oci"
	I1026 01:13:48.492048  104058 command_runner.go:130] > runtime_root = "/run/runc"
	I1026 01:13:48.492060  104058 command_runner.go:130] > runtime_config_path = ""
	I1026 01:13:48.492072  104058 command_runner.go:130] > monitor_path = ""
	I1026 01:13:48.492083  104058 command_runner.go:130] > monitor_cgroup = ""
	I1026 01:13:48.492094  104058 command_runner.go:130] > monitor_exec_cgroup = ""
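	The `[crio.runtime.runtimes.runc]` block above is a concrete instance of the handler-table format documented earlier in the log. As a hedged illustration only (the handler name `crun` and its paths are assumptions, not part of this test run's configuration), an additional handler entry would follow the same shape:

```toml
# Hypothetical extra OCI runtime handler (illustrative values, not from this run).
[crio.runtime.runtimes.crun]
runtime_path = "/usr/bin/crun"   # absolute path; if omitted, "crun" is looked up in $PATH
runtime_type = "oci"             # one of "oci" or "vm"; "oci" is assumed when omitted
runtime_root = "/run/crun"       # per-runtime directory for container state
```

	Pods would then select this handler through the CRI runtime handler mechanism (e.g. a Kubernetes RuntimeClass whose `handler` field is `crun`).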
	I1026 01:13:48.492129  104058 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1026 01:13:48.492139  104058 command_runner.go:130] > # running containers
	I1026 01:13:48.492147  104058 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1026 01:13:48.492158  104058 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1026 01:13:48.492173  104058 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1026 01:13:48.492187  104058 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1026 01:13:48.492199  104058 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1026 01:13:48.492210  104058 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1026 01:13:48.492222  104058 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1026 01:13:48.492234  104058 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1026 01:13:48.492247  104058 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1026 01:13:48.492259  104058 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1026 01:13:48.492273  104058 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1026 01:13:48.492284  104058 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1026 01:13:48.492298  104058 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1026 01:13:48.492315  104058 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1026 01:13:48.492332  104058 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1026 01:13:48.492345  104058 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1026 01:13:48.492366  104058 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1026 01:13:48.492383  104058 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1026 01:13:48.492397  104058 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1026 01:13:48.492413  104058 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1026 01:13:48.492423  104058 command_runner.go:130] > # Example:
	I1026 01:13:48.492434  104058 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1026 01:13:48.492445  104058 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1026 01:13:48.492455  104058 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1026 01:13:48.492468  104058 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1026 01:13:48.492478  104058 command_runner.go:130] > # cpuset = 0
	I1026 01:13:48.492488  104058 command_runner.go:130] > # cpushares = "0-1"
	I1026 01:13:48.492495  104058 command_runner.go:130] > # Where:
	I1026 01:13:48.492507  104058 command_runner.go:130] > # The workload name is workload-type.
	I1026 01:13:48.492522  104058 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1026 01:13:48.492535  104058 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1026 01:13:48.492546  104058 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1026 01:13:48.492563  104058 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1026 01:13:48.492577  104058 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
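	Assembling the commented fragments above, a minimal workload table might look like the sketch below. This feature is marked EXPERIMENTAL in the config itself, and the concrete values here are illustrative assumptions, not taken from this test run:

```toml
# Hypothetical workload definition (EXPERIMENTAL feature; values are examples).
[crio.runtime.workloads.workload-type]
activation_annotation = "io.crio/workload"   # a pod opts in by carrying this annotation key
annotation_prefix = "io.crio.workload-type"  # prefix for per-container resource overrides

[crio.runtime.workloads.workload-type.resources]
cpushares = "1024"   # default cpu shares applied to opted-in containers
cpuset = "0-1"       # default cpuset, in Linux CPU list format
```

	A per-container override would then use the documented annotation form, e.g. `io.crio.workload-type.cpushares/<container-name>` on the pod.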
	I1026 01:13:48.492586  104058 command_runner.go:130] > # 
	I1026 01:13:48.492600  104058 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1026 01:13:48.492608  104058 command_runner.go:130] > #
	I1026 01:13:48.492620  104058 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1026 01:13:48.492634  104058 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1026 01:13:48.492648  104058 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1026 01:13:48.492663  104058 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1026 01:13:48.492677  104058 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1026 01:13:48.492687  104058 command_runner.go:130] > [crio.image]
	I1026 01:13:48.492701  104058 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1026 01:13:48.492710  104058 command_runner.go:130] > # default_transport = "docker://"
	I1026 01:13:48.492725  104058 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1026 01:13:48.492739  104058 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1026 01:13:48.492750  104058 command_runner.go:130] > # global_auth_file = ""
	I1026 01:13:48.492763  104058 command_runner.go:130] > # The image used to instantiate infra containers.
	I1026 01:13:48.492775  104058 command_runner.go:130] > # This option supports live configuration reload.
	I1026 01:13:48.492787  104058 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1026 01:13:48.492799  104058 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1026 01:13:48.492813  104058 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1026 01:13:48.492826  104058 command_runner.go:130] > # This option supports live configuration reload.
	I1026 01:13:48.492837  104058 command_runner.go:130] > # pause_image_auth_file = ""
	I1026 01:13:48.492851  104058 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1026 01:13:48.492865  104058 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1026 01:13:48.492879  104058 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1026 01:13:48.492893  104058 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1026 01:13:48.492905  104058 command_runner.go:130] > # pause_command = "/pause"
	I1026 01:13:48.492919  104058 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1026 01:13:48.492934  104058 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1026 01:13:48.492949  104058 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1026 01:13:48.492963  104058 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1026 01:13:48.492977  104058 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1026 01:13:48.492992  104058 command_runner.go:130] > # signature_policy = ""
	I1026 01:13:48.493013  104058 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1026 01:13:48.493028  104058 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1026 01:13:48.493038  104058 command_runner.go:130] > # changing them here.
	I1026 01:13:48.493046  104058 command_runner.go:130] > # insecure_registries = [
	I1026 01:13:48.493053  104058 command_runner.go:130] > # ]
	I1026 01:13:48.493067  104058 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1026 01:13:48.493080  104058 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1026 01:13:48.493091  104058 command_runner.go:130] > # image_volumes = "mkdir"
	I1026 01:13:48.493104  104058 command_runner.go:130] > # Temporary directory to use for storing big files
	I1026 01:13:48.493115  104058 command_runner.go:130] > # big_files_temporary_dir = ""
	I1026 01:13:48.493130  104058 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1026 01:13:48.493140  104058 command_runner.go:130] > # CNI plugins.
	I1026 01:13:48.493151  104058 command_runner.go:130] > [crio.network]
	I1026 01:13:48.493163  104058 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1026 01:13:48.493176  104058 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1026 01:13:48.493188  104058 command_runner.go:130] > # cni_default_network = ""
	I1026 01:13:48.493201  104058 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1026 01:13:48.493212  104058 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1026 01:13:48.493222  104058 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1026 01:13:48.493232  104058 command_runner.go:130] > # plugin_dirs = [
	I1026 01:13:48.493241  104058 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1026 01:13:48.493250  104058 command_runner.go:130] > # ]
	I1026 01:13:48.493263  104058 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1026 01:13:48.493273  104058 command_runner.go:130] > [crio.metrics]
	I1026 01:13:48.493286  104058 command_runner.go:130] > # Globally enable or disable metrics support.
	I1026 01:13:48.493295  104058 command_runner.go:130] > # enable_metrics = false
	I1026 01:13:48.493304  104058 command_runner.go:130] > # Specify enabled metrics collectors.
	I1026 01:13:48.493316  104058 command_runner.go:130] > # Per default all metrics are enabled.
	I1026 01:13:48.493330  104058 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1026 01:13:48.493345  104058 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1026 01:13:48.493358  104058 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1026 01:13:48.493370  104058 command_runner.go:130] > # metrics_collectors = [
	I1026 01:13:48.493380  104058 command_runner.go:130] > # 	"operations",
	I1026 01:13:48.493389  104058 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1026 01:13:48.493401  104058 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1026 01:13:48.493412  104058 command_runner.go:130] > # 	"operations_errors",
	I1026 01:13:48.493423  104058 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1026 01:13:48.493435  104058 command_runner.go:130] > # 	"image_pulls_by_name",
	I1026 01:13:48.493446  104058 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1026 01:13:48.493457  104058 command_runner.go:130] > # 	"image_pulls_failures",
	I1026 01:13:48.493467  104058 command_runner.go:130] > # 	"image_pulls_successes",
	I1026 01:13:48.493474  104058 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1026 01:13:48.493482  104058 command_runner.go:130] > # 	"image_layer_reuse",
	I1026 01:13:48.493492  104058 command_runner.go:130] > # 	"containers_oom_total",
	I1026 01:13:48.493500  104058 command_runner.go:130] > # 	"containers_oom",
	I1026 01:13:48.493510  104058 command_runner.go:130] > # 	"processes_defunct",
	I1026 01:13:48.493519  104058 command_runner.go:130] > # 	"operations_total",
	I1026 01:13:48.493530  104058 command_runner.go:130] > # 	"operations_latency_seconds",
	I1026 01:13:48.493542  104058 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1026 01:13:48.493553  104058 command_runner.go:130] > # 	"operations_errors_total",
	I1026 01:13:48.493563  104058 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1026 01:13:48.493572  104058 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1026 01:13:48.493594  104058 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1026 01:13:48.493607  104058 command_runner.go:130] > # 	"image_pulls_success_total",
	I1026 01:13:48.493615  104058 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1026 01:13:48.493629  104058 command_runner.go:130] > # 	"containers_oom_count_total",
	I1026 01:13:48.493638  104058 command_runner.go:130] > # ]
	I1026 01:13:48.493649  104058 command_runner.go:130] > # The port on which the metrics server will listen.
	I1026 01:13:48.493661  104058 command_runner.go:130] > # metrics_port = 9090
	I1026 01:13:48.493694  104058 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1026 01:13:48.493706  104058 command_runner.go:130] > # metrics_socket = ""
	I1026 01:13:48.493718  104058 command_runner.go:130] > # The certificate for the secure metrics server.
	I1026 01:13:48.493732  104058 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1026 01:13:48.493747  104058 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1026 01:13:48.493758  104058 command_runner.go:130] > # certificate on any modification event.
	I1026 01:13:48.493765  104058 command_runner.go:130] > # metrics_cert = ""
	I1026 01:13:48.493775  104058 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1026 01:13:48.493787  104058 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1026 01:13:48.493798  104058 command_runner.go:130] > # metrics_key = ""
	I1026 01:13:48.493813  104058 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1026 01:13:48.493823  104058 command_runner.go:130] > [crio.tracing]
	I1026 01:13:48.493834  104058 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1026 01:13:48.493846  104058 command_runner.go:130] > # enable_tracing = false
	I1026 01:13:48.493859  104058 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1026 01:13:48.493871  104058 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1026 01:13:48.493884  104058 command_runner.go:130] > # Number of samples to collect per million spans.
	I1026 01:13:48.493896  104058 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1026 01:13:48.493910  104058 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1026 01:13:48.493921  104058 command_runner.go:130] > [crio.stats]
	I1026 01:13:48.493933  104058 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1026 01:13:48.493946  104058 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1026 01:13:48.493956  104058 command_runner.go:130] > # stats_collection_period = 0
	I1026 01:13:48.494010  104058 command_runner.go:130] ! time="2023-10-26 01:13:48.485776904Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I1026 01:13:48.494032  104058 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1026 01:13:48.494114  104058 cni.go:84] Creating CNI manager for ""
	I1026 01:13:48.494124  104058 cni.go:136] 2 nodes found, recommending kindnet
	I1026 01:13:48.494135  104058 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1026 01:13:48.494161  104058 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-204768 NodeName:multinode-204768-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 01:13:48.494306  104058 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-204768-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 01:13:48.494379  104058 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-204768-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-204768 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1026 01:13:48.494441  104058 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1026 01:13:48.502417  104058 command_runner.go:130] > kubeadm
	I1026 01:13:48.502445  104058 command_runner.go:130] > kubectl
	I1026 01:13:48.502452  104058 command_runner.go:130] > kubelet
	I1026 01:13:48.502474  104058 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 01:13:48.502527  104058 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1026 01:13:48.510011  104058 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1026 01:13:48.525496  104058 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 01:13:48.541448  104058 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1026 01:13:48.544497  104058 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 01:13:48.554209  104058 host.go:66] Checking if "multinode-204768" exists ...
	I1026 01:13:48.554439  104058 config.go:182] Loaded profile config "multinode-204768": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1026 01:13:48.554430  104058 start.go:304] JoinCluster: &{Name:multinode-204768 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-204768 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1026 01:13:48.554508  104058 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1026 01:13:48.554543  104058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-204768
	I1026 01:13:48.571165  104058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32849 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/multinode-204768/id_rsa Username:docker}
	I1026 01:13:48.713110  104058 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token ibflyh.02o865wvoxtkimzh --discovery-token-ca-cert-hash sha256:fcb226ee6da23e7f860dc1a15447b5e2bdaebad51636d54784ba9f6eb94cd3aa 
	I1026 01:13:48.713169  104058 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1026 01:13:48.713210  104058 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ibflyh.02o865wvoxtkimzh --discovery-token-ca-cert-hash sha256:fcb226ee6da23e7f860dc1a15447b5e2bdaebad51636d54784ba9f6eb94cd3aa --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-204768-m02"
	I1026 01:13:48.748353  104058 command_runner.go:130] > [preflight] Running pre-flight checks
	I1026 01:13:48.775997  104058 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1026 01:13:48.776021  104058 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1045-gcp
	I1026 01:13:48.776028  104058 command_runner.go:130] > OS: Linux
	I1026 01:13:48.776034  104058 command_runner.go:130] > CGROUPS_CPU: enabled
	I1026 01:13:48.776041  104058 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1026 01:13:48.776049  104058 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1026 01:13:48.776056  104058 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1026 01:13:48.776068  104058 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1026 01:13:48.776081  104058 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1026 01:13:48.776096  104058 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1026 01:13:48.776108  104058 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1026 01:13:48.776117  104058 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1026 01:13:48.854911  104058 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1026 01:13:48.854940  104058 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1026 01:13:48.878708  104058 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 01:13:48.878732  104058 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 01:13:48.878738  104058 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1026 01:13:48.959871  104058 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1026 01:13:51.473809  104058 command_runner.go:130] > This node has joined the cluster:
	I1026 01:13:51.473838  104058 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1026 01:13:51.473849  104058 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1026 01:13:51.473860  104058 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1026 01:13:51.476652  104058 command_runner.go:130] ! W1026 01:13:48.747892    1114 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1026 01:13:51.476679  104058 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-gcp\n", err: exit status 1
	I1026 01:13:51.476689  104058 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1026 01:13:51.476708  104058 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ibflyh.02o865wvoxtkimzh --discovery-token-ca-cert-hash sha256:fcb226ee6da23e7f860dc1a15447b5e2bdaebad51636d54784ba9f6eb94cd3aa --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-204768-m02": (2.763483165s)
	I1026 01:13:51.476722  104058 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1026 01:13:51.638457  104058 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I1026 01:13:51.638495  104058 start.go:306] JoinCluster complete in 3.084064253s
	I1026 01:13:51.638509  104058 cni.go:84] Creating CNI manager for ""
	I1026 01:13:51.638516  104058 cni.go:136] 2 nodes found, recommending kindnet
	I1026 01:13:51.638562  104058 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1026 01:13:51.641791  104058 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1026 01:13:51.641814  104058 command_runner.go:130] >   Size: 3955775   	Blocks: 7736       IO Block: 4096   regular file
	I1026 01:13:51.641820  104058 command_runner.go:130] > Device: 36h/54d	Inode: 804964      Links: 1
	I1026 01:13:51.641827  104058 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1026 01:13:51.641832  104058 command_runner.go:130] > Access: 2023-05-09 19:53:47.000000000 +0000
	I1026 01:13:51.641837  104058 command_runner.go:130] > Modify: 2023-05-09 19:53:47.000000000 +0000
	I1026 01:13:51.641843  104058 command_runner.go:130] > Change: 2023-10-26 00:53:54.615237767 +0000
	I1026 01:13:51.641849  104058 command_runner.go:130] >  Birth: 2023-10-26 00:53:54.591235463 +0000
	I1026 01:13:51.641906  104058 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1026 01:13:51.641920  104058 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1026 01:13:51.657266  104058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1026 01:13:51.855249  104058 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1026 01:13:51.859475  104058 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1026 01:13:51.862132  104058 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1026 01:13:51.873002  104058 command_runner.go:130] > daemonset.apps/kindnet configured
	I1026 01:13:51.878733  104058 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17491-8444/kubeconfig
	I1026 01:13:51.879079  104058 kapi.go:59] client config for multinode-204768: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17491-8444/.minikube/profiles/multinode-204768/client.crt", KeyFile:"/home/jenkins/minikube-integration/17491-8444/.minikube/profiles/multinode-204768/client.key", CAFile:"/home/jenkins/minikube-integration/17491-8444/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c28c40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1026 01:13:51.879500  104058 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1026 01:13:51.879518  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:51.879528  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:51.879538  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:51.881943  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:51.881972  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:51.881983  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:51.881991  104058 round_trippers.go:580]     Content-Length: 291
	I1026 01:13:51.882000  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:51 GMT
	I1026 01:13:51.882016  104058 round_trippers.go:580]     Audit-Id: 1cb20682-adb5-4455-a8e6-c479d8d6b546
	I1026 01:13:51.882027  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:51.882040  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:51.882053  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:51.882081  104058 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"748d54dc-a561-49f3-94e8-d26ebdbe621b","resourceVersion":"446","creationTimestamp":"2023-10-26T01:12:51Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1026 01:13:51.882194  104058 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-204768" context rescaled to 1 replicas
	I1026 01:13:51.882231  104058 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1026 01:13:51.884240  104058 out.go:177] * Verifying Kubernetes components...
	I1026 01:13:51.886615  104058 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 01:13:51.897948  104058 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17491-8444/kubeconfig
	I1026 01:13:51.898259  104058 kapi.go:59] client config for multinode-204768: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17491-8444/.minikube/profiles/multinode-204768/client.crt", KeyFile:"/home/jenkins/minikube-integration/17491-8444/.minikube/profiles/multinode-204768/client.key", CAFile:"/home/jenkins/minikube-integration/17491-8444/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c28c40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1026 01:13:51.898602  104058 node_ready.go:35] waiting up to 6m0s for node "multinode-204768-m02" to be "Ready" ...
	I1026 01:13:51.898687  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:13:51.898698  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:51.898706  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:51.898718  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:51.901209  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:51.901227  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:51.901234  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:51.901241  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:51 GMT
	I1026 01:13:51.901250  104058 round_trippers.go:580]     Audit-Id: 42a72297-5a45-4557-b6cd-b45388242aea
	I1026 01:13:51.901259  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:51.901271  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:51.901283  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:51.901436  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"482","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I1026 01:13:51.901822  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:13:51.901836  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:51.901843  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:51.901848  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:51.903697  104058 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1026 01:13:51.903718  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:51.903728  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:51.903739  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:51.903748  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:51 GMT
	I1026 01:13:51.903759  104058 round_trippers.go:580]     Audit-Id: 02afeb76-be2b-40d8-830c-d39c636fa1c7
	I1026 01:13:51.903771  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:51.903782  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:51.903870  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"482","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I1026 01:13:52.404918  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:13:52.404949  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:52.404957  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:52.404963  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:52.407290  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:52.407309  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:52.407316  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:52 GMT
	I1026 01:13:52.407321  104058 round_trippers.go:580]     Audit-Id: 9a6a3888-2cdc-482b-9a74-2c2a367b4908
	I1026 01:13:52.407326  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:52.407332  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:52.407336  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:52.407342  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:52.407463  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"482","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I1026 01:13:52.905154  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:13:52.905187  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:52.905196  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:52.905202  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:52.907596  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:52.907616  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:52.907623  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:52.907629  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:52.907634  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:52 GMT
	I1026 01:13:52.907639  104058 round_trippers.go:580]     Audit-Id: d90b0b2a-7864-40e7-a622-e9aed1ed4c76
	I1026 01:13:52.907644  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:52.907650  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:52.907761  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"482","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I1026 01:13:53.404307  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:13:53.404335  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:53.404344  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:53.404350  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:53.406650  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:53.406670  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:53.406677  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:53.406682  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:53.406687  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:53.406694  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:53 GMT
	I1026 01:13:53.406703  104058 round_trippers.go:580]     Audit-Id: dd73a717-48ad-49f2-98d3-4a2670f399f7
	I1026 01:13:53.406716  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:53.406811  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"482","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I1026 01:13:53.904392  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:13:53.904423  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:53.904434  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:53.904440  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:53.906849  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:53.906869  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:53.906876  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:53.906882  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:53.906887  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:53.906892  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:53.906897  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:53 GMT
	I1026 01:13:53.906902  104058 round_trippers.go:580]     Audit-Id: 94126a7c-f22d-4dca-83c6-775b4f60cbd4
	I1026 01:13:53.907017  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"482","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I1026 01:13:53.907394  104058 node_ready.go:58] node "multinode-204768-m02" has status "Ready":"False"
	I1026 01:13:54.404646  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:13:54.404670  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:54.404678  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:54.404684  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:54.406984  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:54.407003  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:54.407010  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:54.407015  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:54.407022  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:54 GMT
	I1026 01:13:54.407027  104058 round_trippers.go:580]     Audit-Id: 54843d03-2759-4c79-b5eb-ec11d1bc89ca
	I1026 01:13:54.407033  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:54.407042  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:54.407128  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"482","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I1026 01:13:54.904988  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:13:54.905010  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:54.905018  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:54.905024  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:54.907290  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:54.907314  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:54.907322  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:54.907329  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:54 GMT
	I1026 01:13:54.907334  104058 round_trippers.go:580]     Audit-Id: 897c46d1-bd87-478d-aa83-8e133b30640f
	I1026 01:13:54.907339  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:54.907344  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:54.907349  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:54.907486  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"498","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I1026 01:13:55.405158  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:13:55.405185  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:55.405195  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:55.405203  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:55.407657  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:55.407683  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:55.407692  104058 round_trippers.go:580]     Audit-Id: bb30522c-a943-4cd3-9f48-73cbb8764a46
	I1026 01:13:55.407700  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:55.407707  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:55.407714  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:55.407722  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:55.407730  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:55 GMT
	I1026 01:13:55.407855  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"498","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I1026 01:13:55.904422  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:13:55.904446  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:55.904455  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:55.904462  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:55.908272  104058 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:13:55.908293  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:55.908304  104058 round_trippers.go:580]     Audit-Id: bc5524c3-9ae2-4746-a563-c69b0e88bca9
	I1026 01:13:55.908312  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:55.908319  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:55.908326  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:55.908334  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:55.908346  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:55 GMT
	I1026 01:13:55.908442  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"498","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I1026 01:13:55.908740  104058 node_ready.go:58] node "multinode-204768-m02" has status "Ready":"False"
	I1026 01:13:56.405040  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:13:56.405060  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:56.405068  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:56.405074  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:56.407483  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:56.407508  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:56.407517  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:56.407525  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:56.407532  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:56.407543  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:56.407554  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:56 GMT
	I1026 01:13:56.407570  104058 round_trippers.go:580]     Audit-Id: b9a27f28-e87e-4911-b59c-7533d8ff9d1a
	I1026 01:13:56.407692  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"498","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I1026 01:13:56.904282  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:13:56.904310  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:56.904324  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:56.904332  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:56.906934  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:56.906960  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:56.906970  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:56 GMT
	I1026 01:13:56.906979  104058 round_trippers.go:580]     Audit-Id: e9d03f8a-ba10-47f8-9449-da725a19c112
	I1026 01:13:56.906998  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:56.907006  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:56.907025  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:56.907037  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:56.907120  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"498","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I1026 01:13:57.404681  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:13:57.404708  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:57.404718  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:57.404726  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:57.407148  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:57.407171  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:57.407181  104058 round_trippers.go:580]     Audit-Id: 2f7c8670-b556-44c1-9bef-153a77886756
	I1026 01:13:57.407189  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:57.407197  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:57.407212  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:57.407226  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:57.407235  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:57 GMT
	I1026 01:13:57.407380  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"498","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I1026 01:13:57.904972  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:13:57.905000  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:57.905012  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:57.905022  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:57.907423  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:57.907444  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:57.907453  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:57.907461  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:57 GMT
	I1026 01:13:57.907468  104058 round_trippers.go:580]     Audit-Id: ccba7f8c-15a0-4910-ae11-7a47a0706d60
	I1026 01:13:57.907475  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:57.907483  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:57.907493  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:57.907609  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"498","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I1026 01:13:58.405274  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:13:58.405300  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:58.405311  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:58.405319  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:58.407847  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:58.407874  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:58.407885  104058 round_trippers.go:580]     Audit-Id: b35a7ff9-46aa-4254-aa5b-dbd70e04043c
	I1026 01:13:58.407894  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:58.407904  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:58.407918  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:58.407931  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:58.407944  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:58 GMT
	I1026 01:13:58.408053  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"498","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I1026 01:13:58.408335  104058 node_ready.go:58] node "multinode-204768-m02" has status "Ready":"False"
	I1026 01:13:58.904446  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:13:58.904470  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:58.904483  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:58.904490  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:58.906744  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:58.906770  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:58.906776  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:58.906782  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:58.906787  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:58 GMT
	I1026 01:13:58.906792  104058 round_trippers.go:580]     Audit-Id: c5a72a2a-ae38-4c84-ab5b-7e82489b2836
	I1026 01:13:58.906797  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:58.906802  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:58.906876  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"498","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I1026 01:13:59.404464  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:13:59.404486  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:59.404494  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:59.404500  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:59.407000  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:59.407023  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:59.407034  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:59.407043  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:59.407054  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:59.407064  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:59 GMT
	I1026 01:13:59.407077  104058 round_trippers.go:580]     Audit-Id: 318becf7-1093-4498-b580-497b05f96bf8
	I1026 01:13:59.407087  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:59.407203  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"498","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I1026 01:13:59.904726  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:13:59.904753  104058 round_trippers.go:469] Request Headers:
	I1026 01:13:59.904761  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:13:59.904767  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:13:59.907091  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:13:59.907111  104058 round_trippers.go:577] Response Headers:
	I1026 01:13:59.907118  104058 round_trippers.go:580]     Audit-Id: e04f604f-3df6-459e-b616-f55df6469620
	I1026 01:13:59.907124  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:13:59.907130  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:13:59.907135  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:13:59.907143  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:13:59.907150  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:13:59 GMT
	I1026 01:13:59.907256  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"498","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I1026 01:14:00.404261  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:00.404285  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:00.404293  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:00.404298  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:00.406588  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:00.406610  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:00.406618  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:00.406625  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:00.406631  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:00 GMT
	I1026 01:14:00.406637  104058 round_trippers.go:580]     Audit-Id: 4658d3cb-4ee3-4b06-a00c-6d7f20b10e42
	I1026 01:14:00.406647  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:00.406653  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:00.406761  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"498","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I1026 01:14:00.905227  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:00.905249  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:00.905257  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:00.905264  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:00.907750  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:00.907773  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:00.907782  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:00.907790  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:00.907796  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:00.907804  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:00 GMT
	I1026 01:14:00.907811  104058 round_trippers.go:580]     Audit-Id: 2850063f-349a-4c9b-befe-e05a346f5675
	I1026 01:14:00.907819  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:00.907909  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"498","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I1026 01:14:00.908213  104058 node_ready.go:58] node "multinode-204768-m02" has status "Ready":"False"
	I1026 01:14:01.404464  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:01.404486  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:01.404494  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:01.404500  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:01.406667  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:01.406688  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:01.406698  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:01.406706  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:01.406713  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:01.406720  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:01 GMT
	I1026 01:14:01.406728  104058 round_trippers.go:580]     Audit-Id: 26ecd3f3-84e5-4601-9611-19a7a5d45278
	I1026 01:14:01.406736  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:01.406923  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:01.904502  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:01.904522  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:01.904530  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:01.904536  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:01.906822  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:01.906842  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:01.906853  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:01.906860  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:01.906880  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:01.906889  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:01 GMT
	I1026 01:14:01.906901  104058 round_trippers.go:580]     Audit-Id: bacc8a1d-541a-485b-8ad1-c13947ae0e69
	I1026 01:14:01.906911  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:01.907011  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:02.404612  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:02.404642  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:02.404654  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:02.404662  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:02.406983  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:02.407003  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:02.407012  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:02.407020  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:02.407026  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:02 GMT
	I1026 01:14:02.407034  104058 round_trippers.go:580]     Audit-Id: 47c420df-9ab8-4bba-bc61-a18316958fac
	I1026 01:14:02.407042  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:02.407051  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:02.407207  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:02.904755  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:02.904778  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:02.904787  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:02.904793  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:02.907246  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:02.907268  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:02.907277  104058 round_trippers.go:580]     Audit-Id: fac1ac90-7e20-4519-80d5-7c56c630fc43
	I1026 01:14:02.907285  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:02.907292  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:02.907300  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:02.907309  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:02.907318  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:02 GMT
	I1026 01:14:02.907405  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:03.405035  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:03.405059  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:03.405068  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:03.405074  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:03.407290  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:03.407317  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:03.407328  104058 round_trippers.go:580]     Audit-Id: 0b144603-abb0-4bbb-97cd-71f876ef5410
	I1026 01:14:03.407337  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:03.407346  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:03.407353  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:03.407359  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:03.407364  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:03 GMT
	I1026 01:14:03.407539  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:03.407842  104058 node_ready.go:58] node "multinode-204768-m02" has status "Ready":"False"
	I1026 01:14:03.905072  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:03.905094  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:03.905102  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:03.905109  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:03.907435  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:03.907456  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:03.907463  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:03 GMT
	I1026 01:14:03.907468  104058 round_trippers.go:580]     Audit-Id: 1b0a79a1-3816-4e21-b0e5-97f0754ea5be
	I1026 01:14:03.907473  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:03.907479  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:03.907484  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:03.907490  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:03.907561  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:04.405254  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:04.405278  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:04.405286  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:04.405292  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:04.407706  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:04.407724  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:04.407730  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:04 GMT
	I1026 01:14:04.407736  104058 round_trippers.go:580]     Audit-Id: a3797bc3-19fe-4278-a2f7-ebddb81b847c
	I1026 01:14:04.407741  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:04.407746  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:04.407751  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:04.407756  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:04.407855  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:04.904785  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:04.904806  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:04.904815  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:04.904821  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:04.907305  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:04.907322  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:04.907329  104058 round_trippers.go:580]     Audit-Id: b59482ab-ca13-47c9-9d33-ff2a2ceebc1d
	I1026 01:14:04.907334  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:04.907340  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:04.907344  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:04.907350  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:04.907355  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:04 GMT
	I1026 01:14:04.907439  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:05.405241  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:05.405274  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:05.405286  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:05.405294  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:05.407667  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:05.407691  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:05.407702  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:05.407710  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:05.407719  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:05.407732  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:05.407746  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:05 GMT
	I1026 01:14:05.407759  104058 round_trippers.go:580]     Audit-Id: f06ea484-b69d-46e1-be0d-421f4188c2f1
	I1026 01:14:05.407890  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:05.408212  104058 node_ready.go:58] node "multinode-204768-m02" has status "Ready":"False"
	I1026 01:14:05.904334  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:05.904355  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:05.904365  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:05.904371  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:05.906642  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:05.906661  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:05.906670  104058 round_trippers.go:580]     Audit-Id: 586a1eac-139f-4f74-9858-554f284f7074
	I1026 01:14:05.906678  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:05.906686  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:05.906694  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:05.906705  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:05.906716  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:05 GMT
	I1026 01:14:05.906800  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:06.404350  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:06.404374  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:06.404383  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:06.404389  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:06.406804  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:06.406823  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:06.406829  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:06 GMT
	I1026 01:14:06.406835  104058 round_trippers.go:580]     Audit-Id: c49ab7aa-12c2-42f7-b347-4c5cac807acf
	I1026 01:14:06.406840  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:06.406845  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:06.406850  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:06.406857  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:06.407018  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:06.904375  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:06.904397  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:06.904405  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:06.904411  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:06.906804  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:06.906822  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:06.906829  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:06.906835  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:06.906840  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:06.906845  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:06.906850  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:06 GMT
	I1026 01:14:06.906857  104058 round_trippers.go:580]     Audit-Id: e378fbfa-8c18-4754-9548-f74ae492285b
	I1026 01:14:06.906958  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:07.404534  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:07.404555  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:07.404563  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:07.404570  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:07.406982  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:07.407003  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:07.407011  104058 round_trippers.go:580]     Audit-Id: e6f1a017-5924-4dec-af99-1cd265d5a82e
	I1026 01:14:07.407016  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:07.407026  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:07.407031  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:07.407037  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:07.407045  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:07 GMT
	I1026 01:14:07.407142  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:07.904635  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:07.904656  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:07.904664  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:07.904670  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:07.907303  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:07.907325  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:07.907332  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:07.907338  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:07.907343  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:07.907349  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:07 GMT
	I1026 01:14:07.907355  104058 round_trippers.go:580]     Audit-Id: 1b8dc8ca-6f6f-463c-bbc6-f778efd70a69
	I1026 01:14:07.907363  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:07.907545  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:07.907858  104058 node_ready.go:58] node "multinode-204768-m02" has status "Ready":"False"
	I1026 01:14:08.405269  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:08.405295  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:08.405303  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:08.405309  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:08.407894  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:08.407918  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:08.407929  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:08.407936  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:08 GMT
	I1026 01:14:08.407942  104058 round_trippers.go:580]     Audit-Id: 1a67c1e6-9043-4f8a-a3d0-33f5bcb1c024
	I1026 01:14:08.407949  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:08.407956  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:08.407964  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:08.408100  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:08.904571  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:08.904594  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:08.904602  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:08.904608  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:08.906920  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:08.906948  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:08.906958  104058 round_trippers.go:580]     Audit-Id: 9de82db3-a9ae-488b-8209-5d4e9f98221c
	I1026 01:14:08.906966  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:08.906973  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:08.906982  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:08.906994  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:08.907006  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:08 GMT
	I1026 01:14:08.907091  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:09.404792  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:09.404811  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:09.404819  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:09.404826  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:09.407266  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:09.407288  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:09.407295  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:09 GMT
	I1026 01:14:09.407300  104058 round_trippers.go:580]     Audit-Id: 537aeb3e-3593-44f7-9f22-fd5e04c29572
	I1026 01:14:09.407308  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:09.407317  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:09.407324  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:09.407332  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:09.407512  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:09.905127  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:09.905149  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:09.905157  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:09.905163  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:09.907550  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:09.907574  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:09.907583  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:09.907589  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:09.907594  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:09.907601  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:09.907608  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:09 GMT
	I1026 01:14:09.907621  104058 round_trippers.go:580]     Audit-Id: 9a696e7c-3c9c-478f-963a-b24badb9a40b
	I1026 01:14:09.907755  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:09.908084  104058 node_ready.go:58] node "multinode-204768-m02" has status "Ready":"False"
	I1026 01:14:10.404631  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:10.404653  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:10.404661  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:10.404667  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:10.407013  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:10.407041  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:10.407048  104058 round_trippers.go:580]     Audit-Id: 14fa177b-d3e2-4033-b4d1-48edc6d12978
	I1026 01:14:10.407054  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:10.407059  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:10.407064  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:10.407069  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:10.407075  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:10 GMT
	I1026 01:14:10.407171  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:10.904331  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:10.904357  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:10.904366  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:10.904372  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:10.906855  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:10.906882  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:10.906893  104058 round_trippers.go:580]     Audit-Id: 304b4c14-6734-4cc6-b7e8-ca53981a1b85
	I1026 01:14:10.906901  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:10.906909  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:10.906917  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:10.906926  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:10.906934  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:10 GMT
	I1026 01:14:10.907052  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:11.404643  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:11.404669  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:11.404682  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:11.404689  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:11.406858  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:11.406891  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:11.406901  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:11.406912  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:11.406926  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:11 GMT
	I1026 01:14:11.406935  104058 round_trippers.go:580]     Audit-Id: 9080d33c-465d-41b1-997c-8dc038bd6b3c
	I1026 01:14:11.406948  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:11.406961  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:11.407078  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:11.905200  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:11.905235  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:11.905244  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:11.905250  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:11.907626  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:11.907646  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:11.907653  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:11.907658  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:11.907664  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:11 GMT
	I1026 01:14:11.907669  104058 round_trippers.go:580]     Audit-Id: 4b420be4-57a5-43ea-9d67-305d5297d92b
	I1026 01:14:11.907674  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:11.907683  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:11.907774  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:12.405180  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:12.405209  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:12.405223  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:12.405231  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:12.407674  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:12.407704  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:12.407713  104058 round_trippers.go:580]     Audit-Id: 4922f4aa-7b8b-4885-b2c6-afc526a92cfc
	I1026 01:14:12.407719  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:12.407724  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:12.407729  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:12.407739  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:12.407745  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:12 GMT
	I1026 01:14:12.407911  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:12.408231  104058 node_ready.go:58] node "multinode-204768-m02" has status "Ready":"False"
	I1026 01:14:12.904379  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:12.904403  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:12.904414  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:12.904420  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:12.906721  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:12.906746  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:12.906756  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:12.906766  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:12.906774  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:12.906784  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:12.906800  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:12 GMT
	I1026 01:14:12.906809  104058 round_trippers.go:580]     Audit-Id: 7aa6f214-d62a-40e2-8eed-8637948ab680
	I1026 01:14:12.906919  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:13.404419  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:13.404441  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:13.404450  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:13.404455  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:13.406735  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:13.406756  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:13.406764  104058 round_trippers.go:580]     Audit-Id: 7abde13d-f349-4356-a5c9-59b646cc6f7a
	I1026 01:14:13.406772  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:13.406780  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:13.406788  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:13.406796  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:13.406808  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:13 GMT
	I1026 01:14:13.406978  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:13.904370  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:13.904394  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:13.904402  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:13.904409  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:13.906853  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:13.906880  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:13.906891  104058 round_trippers.go:580]     Audit-Id: 9679e4b2-3509-476f-a57e-d91e4b59405a
	I1026 01:14:13.906899  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:13.906908  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:13.906917  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:13.906933  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:13.906942  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:13 GMT
	I1026 01:14:13.907027  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:14.404649  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:14.404684  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:14.404697  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:14.404708  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:14.407157  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:14.407181  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:14.407190  104058 round_trippers.go:580]     Audit-Id: 8a6bd7b1-1ca5-42bd-a83a-c27b3eada270
	I1026 01:14:14.407199  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:14.407208  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:14.407215  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:14.407224  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:14.407235  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:14 GMT
	I1026 01:14:14.407356  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:14.905228  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:14.905254  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:14.905269  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:14.905279  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:14.907569  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:14.907596  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:14.907606  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:14.907614  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:14.907622  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:14.907713  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:14.907721  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:14 GMT
	I1026 01:14:14.907728  104058 round_trippers.go:580]     Audit-Id: 9abb99bc-f255-485a-a4b7-d4b511747cef
	I1026 01:14:14.907829  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:14.908118  104058 node_ready.go:58] node "multinode-204768-m02" has status "Ready":"False"
	I1026 01:14:15.404449  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:15.404472  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:15.404480  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:15.404486  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:15.406874  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:15.406900  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:15.406909  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:15.406914  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:15.406921  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:15 GMT
	I1026 01:14:15.406927  104058 round_trippers.go:580]     Audit-Id: ff2aa723-b5fb-4c9f-9a15-bb87f400bbdf
	I1026 01:14:15.406932  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:15.406939  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:15.407047  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:15.904585  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:15.904609  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:15.904617  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:15.904623  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:15.906900  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:15.906922  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:15.906929  104058 round_trippers.go:580]     Audit-Id: 7608bbcd-f7f3-493d-afae-0fd9e6741a02
	I1026 01:14:15.906935  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:15.906940  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:15.906946  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:15.906951  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:15.906956  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:15 GMT
	I1026 01:14:15.907090  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:16.404715  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:16.404737  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:16.404748  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:16.404754  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:16.407029  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:16.407055  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:16.407066  104058 round_trippers.go:580]     Audit-Id: ea12b2c9-84f6-4fcf-a68a-ae6947fd6c5c
	I1026 01:14:16.407074  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:16.407082  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:16.407091  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:16.407099  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:16.407111  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:16 GMT
	I1026 01:14:16.407270  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:16.904781  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:16.904805  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:16.904813  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:16.904819  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:16.907188  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:16.907212  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:16.907223  104058 round_trippers.go:580]     Audit-Id: 2500690a-2aa0-4fb4-9dbb-851de18d82e7
	I1026 01:14:16.907228  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:16.907234  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:16.907239  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:16.907246  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:16.907253  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:16 GMT
	I1026 01:14:16.907351  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:17.405028  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:17.405049  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:17.405057  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:17.405063  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:17.407294  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:17.407312  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:17.407318  104058 round_trippers.go:580]     Audit-Id: 7a55c6bc-e3b5-44c4-a08e-0232f5368fcf
	I1026 01:14:17.407324  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:17.407330  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:17.407335  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:17.407341  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:17.407346  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:17 GMT
	I1026 01:14:17.407464  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:17.407752  104058 node_ready.go:58] node "multinode-204768-m02" has status "Ready":"False"
	I1026 01:14:17.905203  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:17.905226  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:17.905235  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:17.905241  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:17.907636  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:17.907656  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:17.907663  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:17.907668  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:17.907673  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:17.907678  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:17.907684  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:17 GMT
	I1026 01:14:17.907689  104058 round_trippers.go:580]     Audit-Id: 33b66821-954a-4e3a-a05d-c17a5e763774
	I1026 01:14:17.907766  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:18.404285  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:18.404310  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:18.404319  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:18.404330  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:18.406626  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:18.406651  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:18.406661  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:18 GMT
	I1026 01:14:18.406669  104058 round_trippers.go:580]     Audit-Id: f7282281-b938-466c-bbe8-abacb0a52305
	I1026 01:14:18.406679  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:18.406692  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:18.406699  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:18.406706  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:18.406820  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:18.905181  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:18.905205  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:18.905214  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:18.905219  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:18.907605  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:18.907631  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:18.907639  104058 round_trippers.go:580]     Audit-Id: 50ac3ac5-fe9d-49a2-ba50-90c3f1ee4aed
	I1026 01:14:18.907645  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:18.907650  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:18.907655  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:18.907660  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:18.907665  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:18 GMT
	I1026 01:14:18.907767  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:19.404461  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:19.404489  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:19.404501  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:19.404510  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:19.406829  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:19.406860  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:19.406873  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:19.406883  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:19 GMT
	I1026 01:14:19.406894  104058 round_trippers.go:580]     Audit-Id: fa0f7fea-7d4c-4487-a6f6-0d67cc6931e0
	I1026 01:14:19.406904  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:19.406915  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:19.406932  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:19.407047  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:19.904579  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:19.904611  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:19.904618  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:19.904624  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:19.906832  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:19.906859  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:19.906869  104058 round_trippers.go:580]     Audit-Id: 05cc412a-e3ad-451d-aba7-b1372d4f2f8e
	I1026 01:14:19.906878  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:19.906887  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:19.906904  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:19.906916  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:19.906928  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:19 GMT
	I1026 01:14:19.907015  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:19.907307  104058 node_ready.go:58] node "multinode-204768-m02" has status "Ready":"False"
	I1026 01:14:20.404975  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:20.404996  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:20.405004  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:20.405014  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:20.407384  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:20.407406  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:20.407415  104058 round_trippers.go:580]     Audit-Id: 08078bd1-6064-4821-b5a2-f8a8437f2835
	I1026 01:14:20.407423  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:20.407430  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:20.407437  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:20.407445  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:20.407452  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:20 GMT
	I1026 01:14:20.407568  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:20.905175  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:20.905196  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:20.905204  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:20.905210  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:20.907577  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:20.907603  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:20.907613  104058 round_trippers.go:580]     Audit-Id: eeecd38b-4f8a-4bc2-88d4-ac1c92775f2f
	I1026 01:14:20.907622  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:20.907631  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:20.907638  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:20.907646  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:20.907655  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:20 GMT
	I1026 01:14:20.907757  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:21.404372  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:21.404403  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:21.404415  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:21.404425  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:21.406913  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:21.406938  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:21.406945  104058 round_trippers.go:580]     Audit-Id: 16e34f7f-d939-43d6-8a97-8847e55a1a01
	I1026 01:14:21.406951  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:21.406957  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:21.406966  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:21.406974  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:21.406982  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:21 GMT
	I1026 01:14:21.407138  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:21.904723  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:21.904747  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:21.904755  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:21.904761  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:21.907003  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:21.907027  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:21.907036  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:21.907043  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:21.907051  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:21 GMT
	I1026 01:14:21.907058  104058 round_trippers.go:580]     Audit-Id: 5537eb37-0bd2-41a8-a2fb-b963559df8b8
	I1026 01:14:21.907066  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:21.907076  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:21.907165  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:21.907445  104058 node_ready.go:58] node "multinode-204768-m02" has status "Ready":"False"
	I1026 01:14:22.404872  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:22.404901  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:22.404912  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:22.404919  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:22.407497  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:22.407517  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:22.407523  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:22.407528  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:22.407533  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:22 GMT
	I1026 01:14:22.407539  104058 round_trippers.go:580]     Audit-Id: aa02bd77-7402-4101-b3a0-37aa8eb8675a
	I1026 01:14:22.407544  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:22.407549  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:22.407686  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:22.905312  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:22.905335  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:22.905348  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:22.905358  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:22.907632  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:22.907659  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:22.907669  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:22.907678  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:22.907686  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:22.907695  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:22.907704  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:22 GMT
	I1026 01:14:22.907713  104058 round_trippers.go:580]     Audit-Id: 95ad2778-6e00-459c-a16f-21f608f51f18
	I1026 01:14:22.907814  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:23.405151  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:23.405171  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:23.405179  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:23.405184  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:23.407422  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:23.407443  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:23.407449  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:23.407457  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:23.407468  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:23.407478  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:23.407486  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:23 GMT
	I1026 01:14:23.407495  104058 round_trippers.go:580]     Audit-Id: feeb09cc-3766-4f6d-be9e-9ce3e537b1a2
	I1026 01:14:23.407690  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:23.904284  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:23.904308  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:23.904317  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:23.904322  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:23.906523  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:23.906548  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:23.906559  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:23.906568  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:23.906576  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:23.906581  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:23 GMT
	I1026 01:14:23.906587  104058 round_trippers.go:580]     Audit-Id: bfd663d1-19a2-40ad-805f-ff44c1506c65
	I1026 01:14:23.906592  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:23.906673  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:24.404247  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:24.404271  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:24.404282  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:24.404290  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:24.406638  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:24.406660  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:24.406667  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:24.406672  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:24 GMT
	I1026 01:14:24.406678  104058 round_trippers.go:580]     Audit-Id: d5b61a00-00e0-466f-ad8e-375443050a0e
	I1026 01:14:24.406687  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:24.406696  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:24.406705  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:24.406827  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:24.407241  104058 node_ready.go:58] node "multinode-204768-m02" has status "Ready":"False"
	I1026 01:14:24.904392  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:24.904415  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:24.904423  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:24.904429  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:24.906642  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:24.906661  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:24.906668  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:24.906674  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:24.906679  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:24.906686  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:24.906695  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:24 GMT
	I1026 01:14:24.906707  104058 round_trippers.go:580]     Audit-Id: 04791589-c7ef-49cb-b4bc-ab2617f1a4ae
	I1026 01:14:24.906823  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:25.404671  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:25.404700  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:25.404712  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:25.404719  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:25.406909  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:25.406929  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:25.406936  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:25.406941  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:25 GMT
	I1026 01:14:25.406947  104058 round_trippers.go:580]     Audit-Id: 5ba0a45c-d31d-48d1-b4e7-6720955cb6da
	I1026 01:14:25.406952  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:25.406957  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:25.406962  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:25.407084  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:25.904724  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:25.904765  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:25.904777  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:25.904787  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:25.907146  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:25.907164  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:25.907171  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:25.907177  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:25 GMT
	I1026 01:14:25.907184  104058 round_trippers.go:580]     Audit-Id: d19e9802-d12f-41e0-9c8f-b12c74d152d8
	I1026 01:14:25.907190  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:25.907195  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:25.907200  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:25.907271  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:26.404889  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:26.404909  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:26.404917  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:26.404924  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:26.407233  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:26.407252  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:26.407258  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:26 GMT
	I1026 01:14:26.407264  104058 round_trippers.go:580]     Audit-Id: b7c6dffa-d3d1-4127-8fb6-813e1e61a18b
	I1026 01:14:26.407269  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:26.407274  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:26.407279  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:26.407284  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:26.407366  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:26.407642  104058 node_ready.go:58] node "multinode-204768-m02" has status "Ready":"False"
	I1026 01:14:26.904989  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:26.905017  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:26.905029  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:26.905039  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:26.907314  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:26.907341  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:26.907349  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:26.907354  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:26.907359  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:26.907365  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:26.907370  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:26 GMT
	I1026 01:14:26.907375  104058 round_trippers.go:580]     Audit-Id: eb363e8f-ec2c-4ac2-9fed-294bd5fd485c
	I1026 01:14:26.907459  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:27.404822  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:27.404844  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:27.404853  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:27.404859  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:27.407282  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:27.407307  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:27.407317  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:27.407327  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:27.407344  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:27 GMT
	I1026 01:14:27.407353  104058 round_trippers.go:580]     Audit-Id: 32ddc3de-5941-425e-92ce-c7f65d75b763
	I1026 01:14:27.407361  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:27.407369  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:27.407537  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:27.905137  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:27.905156  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:27.905165  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:27.905171  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:27.907459  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:27.907480  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:27.907487  104058 round_trippers.go:580]     Audit-Id: f90289ee-29a6-43af-9ad8-3c92aa336482
	I1026 01:14:27.907493  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:27.907498  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:27.907503  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:27.907508  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:27.907514  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:27 GMT
	I1026 01:14:27.907662  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:28.405132  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:28.405152  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:28.405162  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:28.405169  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:28.407521  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:28.407547  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:28.407558  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:28.407568  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:28 GMT
	I1026 01:14:28.407576  104058 round_trippers.go:580]     Audit-Id: aa99072e-e43b-4a85-83be-413c36fc7590
	I1026 01:14:28.407585  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:28.407595  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:28.407607  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:28.407715  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:28.408056  104058 node_ready.go:58] node "multinode-204768-m02" has status "Ready":"False"
	I1026 01:14:28.904255  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:28.904290  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:28.904299  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:28.904305  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:28.906652  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:28.906679  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:28.906689  104058 round_trippers.go:580]     Audit-Id: 38536ce0-792a-444e-a742-c12e96da5a24
	I1026 01:14:28.906697  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:28.906706  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:28.906713  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:28.906722  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:28.906731  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:28 GMT
	I1026 01:14:28.906837  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:29.404428  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:29.404453  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:29.404461  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:29.404467  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:29.406771  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:29.406796  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:29.406803  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:29.406809  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:29.406814  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:29.406820  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:29 GMT
	I1026 01:14:29.406825  104058 round_trippers.go:580]     Audit-Id: 612b5d74-bf67-4a77-ab02-9101df44eaeb
	I1026 01:14:29.406842  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:29.406963  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:29.905179  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:29.905206  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:29.905214  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:29.905220  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:29.907301  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:29.907321  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:29.907328  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:29.907335  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:29 GMT
	I1026 01:14:29.907344  104058 round_trippers.go:580]     Audit-Id: 867d3391-ba9f-45a8-ade8-c336b120862d
	I1026 01:14:29.907353  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:29.907362  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:29.907373  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:29.907471  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:30.404485  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:30.404512  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:30.404524  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:30.404534  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:30.407018  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:30.407045  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:30.407056  104058 round_trippers.go:580]     Audit-Id: 41e18479-12d6-4c6d-963f-1fa4607858b2
	I1026 01:14:30.407066  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:30.407078  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:30.407084  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:30.407090  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:30.407097  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:30 GMT
	I1026 01:14:30.407210  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:30.904755  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:30.904777  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:30.904786  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:30.904792  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:30.907181  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:30.907203  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:30.907212  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:30.907219  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:30.907227  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:30.907234  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:30 GMT
	I1026 01:14:30.907243  104058 round_trippers.go:580]     Audit-Id: a0d2cdc3-327d-446a-bed1-2c8e9cec79d7
	I1026 01:14:30.907252  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:30.907353  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:30.907659  104058 node_ready.go:58] node "multinode-204768-m02" has status "Ready":"False"
	I1026 01:14:31.405018  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:31.405039  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:31.405047  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:31.405053  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:31.407374  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:31.407396  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:31.407404  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:31.407410  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:31 GMT
	I1026 01:14:31.407415  104058 round_trippers.go:580]     Audit-Id: 9c1cec15-57fe-4986-9e85-a91f6f92bada
	I1026 01:14:31.407420  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:31.407429  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:31.407446  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:31.407568  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:31.905073  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:31.905095  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:31.905105  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:31.905113  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:31.907271  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:31.907293  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:31.907303  104058 round_trippers.go:580]     Audit-Id: dff2918d-818a-4656-9223-8cbc45867450
	I1026 01:14:31.907311  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:31.907319  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:31.907327  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:31.907339  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:31.907347  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:31 GMT
	I1026 01:14:31.907428  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:32.405031  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:32.405051  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:32.405059  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:32.405065  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:32.407449  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:32.407470  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:32.407477  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:32.407483  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:32.407488  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:32.407494  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:32 GMT
	I1026 01:14:32.407499  104058 round_trippers.go:580]     Audit-Id: e5b906ad-2d32-4f91-988f-b621bfb00619
	I1026 01:14:32.407510  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:32.407672  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:32.904266  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:32.904289  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:32.904302  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:32.904308  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:32.906785  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:32.906814  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:32.906823  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:32.906831  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:32.906838  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:32 GMT
	I1026 01:14:32.906845  104058 round_trippers.go:580]     Audit-Id: ba1eab22-934c-4c89-a749-e4e6ac4b0874
	I1026 01:14:32.906852  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:32.906860  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:32.906957  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:33.404381  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:33.404403  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:33.404412  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:33.404418  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:33.406685  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:33.406712  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:33.406721  104058 round_trippers.go:580]     Audit-Id: 3ca074b1-97e9-4156-b35f-dae113466a3b
	I1026 01:14:33.406729  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:33.406735  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:33.406743  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:33.406751  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:33.406760  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:33 GMT
	I1026 01:14:33.406870  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:33.407183  104058 node_ready.go:58] node "multinode-204768-m02" has status "Ready":"False"
	I1026 01:14:33.904362  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:33.904383  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:33.904391  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:33.904399  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:33.906731  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:33.906753  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:33.906762  104058 round_trippers.go:580]     Audit-Id: dd565410-135d-4eaf-a78d-97337ed89d3d
	I1026 01:14:33.906769  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:33.906776  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:33.906783  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:33.906791  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:33.906799  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:33 GMT
	I1026 01:14:33.906899  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:34.404476  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:34.404504  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:34.404516  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:34.404533  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:34.406894  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:34.406917  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:34.406936  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:34.406944  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:34 GMT
	I1026 01:14:34.406950  104058 round_trippers.go:580]     Audit-Id: c2559271-94e8-4127-bc1b-6d0bebd87bba
	I1026 01:14:34.406957  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:34.406962  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:34.406967  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:34.407057  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:34.904714  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:34.904738  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:34.904746  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:34.904752  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:34.907091  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:34.907112  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:34.907120  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:34.907126  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:34.907131  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:34.907136  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:34.907143  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:34 GMT
	I1026 01:14:34.907148  104058 round_trippers.go:580]     Audit-Id: 3ed09322-f473-4943-9554-e208dd1c077f
	I1026 01:14:34.907245  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:35.405006  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:35.405029  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:35.405039  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:35.405048  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:35.407354  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:35.407381  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:35.407392  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:35 GMT
	I1026 01:14:35.407400  104058 round_trippers.go:580]     Audit-Id: e4ea8fba-8733-4814-9a46-863a2f93481a
	I1026 01:14:35.407408  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:35.407416  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:35.407430  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:35.407442  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:35.407620  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:35.407911  104058 node_ready.go:58] node "multinode-204768-m02" has status "Ready":"False"
	I1026 01:14:35.905185  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:35.905206  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:35.905214  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:35.905221  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:35.907460  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:35.907483  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:35.907490  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:35.907496  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:35.907501  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:35.907506  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:35.907511  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:35 GMT
	I1026 01:14:35.907517  104058 round_trippers.go:580]     Audit-Id: 06111bbf-4ebb-4102-9620-4735a7c8b208
	I1026 01:14:35.907610  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"506","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1026 01:14:36.405276  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:36.405299  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:36.405310  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:36.405318  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:36.407663  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:36.407683  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:36.407692  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:36.407697  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:36.407703  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:36 GMT
	I1026 01:14:36.407708  104058 round_trippers.go:580]     Audit-Id: 5c1d6b7b-b2a8-4044-9ee9-e97493fba3f0
	I1026 01:14:36.407713  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:36.407718  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:36.407830  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"550","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5296 chars]
	I1026 01:14:36.408143  104058 node_ready.go:49] node "multinode-204768-m02" has status "Ready":"True"
	I1026 01:14:36.408158  104058 node_ready.go:38] duration metric: took 44.509536427s waiting for node "multinode-204768-m02" to be "Ready" ...
	I1026 01:14:36.408167  104058 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1026 01:14:36.408224  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1026 01:14:36.408231  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:36.408238  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:36.408245  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:36.411227  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:36.411249  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:36.411260  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:36 GMT
	I1026 01:14:36.411269  104058 round_trippers.go:580]     Audit-Id: b137f522-d9b3-490a-ad29-d0d01a50c163
	I1026 01:14:36.411279  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:36.411287  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:36.411293  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:36.411302  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:36.411837  104058 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"550"},"items":[{"metadata":{"name":"coredns-5dd5756b68-dccqq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"40c339fe-ec4b-429f-afa8-f305c33e4344","resourceVersion":"442","creationTimestamp":"2023-10-26T01:13:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"1a2c8df7-8c76-46f4-b773-924c84f49f5a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1a2c8df7-8c76-46f4-b773-924c84f49f5a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68970 chars]
	I1026 01:14:36.413906  104058 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-dccqq" in "kube-system" namespace to be "Ready" ...
	I1026 01:14:36.413973  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-dccqq
	I1026 01:14:36.413981  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:36.413988  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:36.413994  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:36.416053  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:36.416075  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:36.416085  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:36.416094  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:36.416103  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:36 GMT
	I1026 01:14:36.416113  104058 round_trippers.go:580]     Audit-Id: 51c5e66d-5834-4fbe-a89c-842906ff5984
	I1026 01:14:36.416125  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:36.416137  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:36.416246  104058 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-dccqq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"40c339fe-ec4b-429f-afa8-f305c33e4344","resourceVersion":"442","creationTimestamp":"2023-10-26T01:13:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"1a2c8df7-8c76-46f4-b773-924c84f49f5a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1a2c8df7-8c76-46f4-b773-924c84f49f5a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1026 01:14:36.416760  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:14:36.416779  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:36.416786  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:36.416795  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:36.418696  104058 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1026 01:14:36.418719  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:36.418729  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:36 GMT
	I1026 01:14:36.418737  104058 round_trippers.go:580]     Audit-Id: 9d2a8494-a611-4e6b-9fdf-79fe14d0e08c
	I1026 01:14:36.418745  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:36.418754  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:36.418762  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:36.418775  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:36.418932  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"422","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1026 01:14:36.419257  104058 pod_ready.go:92] pod "coredns-5dd5756b68-dccqq" in "kube-system" namespace has status "Ready":"True"
	I1026 01:14:36.419272  104058 pod_ready.go:81] duration metric: took 5.346772ms waiting for pod "coredns-5dd5756b68-dccqq" in "kube-system" namespace to be "Ready" ...
	I1026 01:14:36.419281  104058 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-204768" in "kube-system" namespace to be "Ready" ...
	I1026 01:14:36.419331  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-204768
	I1026 01:14:36.419340  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:36.419346  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:36.419352  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:36.421129  104058 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1026 01:14:36.421151  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:36.421162  104058 round_trippers.go:580]     Audit-Id: 569df016-5812-46eb-ad99-508695b0b766
	I1026 01:14:36.421171  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:36.421180  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:36.421192  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:36.421201  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:36.421214  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:36 GMT
	I1026 01:14:36.421317  104058 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-204768","namespace":"kube-system","uid":"c9c95bc6-cbbf-4412-a34e-68fa705cebd3","resourceVersion":"313","creationTimestamp":"2023-10-26T01:12:50Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"e8d07c850007bf81e9202b3f7ccc144c","kubernetes.io/config.mirror":"e8d07c850007bf81e9202b3f7ccc144c","kubernetes.io/config.seen":"2023-10-26T01:12:46.401057131Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:12:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1026 01:14:36.421796  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:14:36.421813  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:36.421820  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:36.421826  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:36.423563  104058 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1026 01:14:36.423580  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:36.423589  104058 round_trippers.go:580]     Audit-Id: e40cea6d-0d18-4458-a326-88985e5a42f1
	I1026 01:14:36.423597  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:36.423604  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:36.423612  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:36.423621  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:36.423631  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:36 GMT
	I1026 01:14:36.423733  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"422","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1026 01:14:36.424026  104058 pod_ready.go:92] pod "etcd-multinode-204768" in "kube-system" namespace has status "Ready":"True"
	I1026 01:14:36.424041  104058 pod_ready.go:81] duration metric: took 4.753628ms waiting for pod "etcd-multinode-204768" in "kube-system" namespace to be "Ready" ...
	I1026 01:14:36.424061  104058 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-204768" in "kube-system" namespace to be "Ready" ...
	I1026 01:14:36.424123  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-204768
	I1026 01:14:36.424132  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:36.424142  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:36.424153  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:36.425935  104058 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1026 01:14:36.425958  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:36.425968  104058 round_trippers.go:580]     Audit-Id: e7c3a91b-9b4a-427c-8230-9c578ce88038
	I1026 01:14:36.425980  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:36.425988  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:36.425997  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:36.426007  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:36.426021  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:36 GMT
	I1026 01:14:36.426161  104058 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-204768","namespace":"kube-system","uid":"996138a2-c8e3-473f-8adc-cea5c13e9400","resourceVersion":"315","creationTimestamp":"2023-10-26T01:12:52Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"4fdd3118b5471cf161cd04b0bf3d7dfa","kubernetes.io/config.mirror":"4fdd3118b5471cf161cd04b0bf3d7dfa","kubernetes.io/config.seen":"2023-10-26T01:12:52.092072793Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:12:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I1026 01:14:36.426558  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:14:36.426569  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:36.426576  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:36.426583  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:36.428265  104058 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1026 01:14:36.428286  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:36.428297  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:36.428304  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:36.428312  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:36 GMT
	I1026 01:14:36.428321  104058 round_trippers.go:580]     Audit-Id: 67f4c6ef-8c3f-4148-b368-8a9c0dc9f087
	I1026 01:14:36.428334  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:36.428344  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:36.428440  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"422","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1026 01:14:36.428726  104058 pod_ready.go:92] pod "kube-apiserver-multinode-204768" in "kube-system" namespace has status "Ready":"True"
	I1026 01:14:36.428744  104058 pod_ready.go:81] duration metric: took 4.672545ms waiting for pod "kube-apiserver-multinode-204768" in "kube-system" namespace to be "Ready" ...
	I1026 01:14:36.428755  104058 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-204768" in "kube-system" namespace to be "Ready" ...
	I1026 01:14:36.428806  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-204768
	I1026 01:14:36.428816  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:36.428826  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:36.428837  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:36.430698  104058 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1026 01:14:36.430712  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:36.430718  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:36.430723  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:36.430728  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:36.430733  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:36 GMT
	I1026 01:14:36.430738  104058 round_trippers.go:580]     Audit-Id: ba5c3383-e438-4e03-b55d-f59d97ad82af
	I1026 01:14:36.430744  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:36.430888  104058 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-204768","namespace":"kube-system","uid":"29d45769-f580-4533-b706-49744a365a37","resourceVersion":"319","creationTimestamp":"2023-10-26T01:12:52Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c5ab0e7c91688fbde32e6aea37a6a4f1","kubernetes.io/config.mirror":"c5ab0e7c91688fbde32e6aea37a6a4f1","kubernetes.io/config.seen":"2023-10-26T01:12:52.092074804Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:12:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I1026 01:14:36.431291  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:14:36.431304  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:36.431310  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:36.431320  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:36.433264  104058 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1026 01:14:36.433277  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:36.433283  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:36.433288  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:36.433293  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:36.433300  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:36.433308  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:36 GMT
	I1026 01:14:36.433313  104058 round_trippers.go:580]     Audit-Id: cd7a6b10-019a-47b1-8d3d-66fc8c4ad177
	I1026 01:14:36.433417  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"422","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1026 01:14:36.433729  104058 pod_ready.go:92] pod "kube-controller-manager-multinode-204768" in "kube-system" namespace has status "Ready":"True"
	I1026 01:14:36.433744  104058 pod_ready.go:81] duration metric: took 4.981061ms waiting for pod "kube-controller-manager-multinode-204768" in "kube-system" namespace to be "Ready" ...
	I1026 01:14:36.433753  104058 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hkfhh" in "kube-system" namespace to be "Ready" ...
	I1026 01:14:36.606119  104058 request.go:629] Waited for 172.313702ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hkfhh
	I1026 01:14:36.606192  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hkfhh
	I1026 01:14:36.606199  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:36.606210  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:36.606225  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:36.608372  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:36.608391  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:36.608400  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:36.608409  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:36 GMT
	I1026 01:14:36.608416  104058 round_trippers.go:580]     Audit-Id: b36a569f-42b3-4d4a-8aa0-e92e59aef30e
	I1026 01:14:36.608424  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:36.608432  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:36.608437  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:36.608599  104058 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hkfhh","generateName":"kube-proxy-","namespace":"kube-system","uid":"1fb5ef2f-82a6-48b5-bb1b-9f7461ed90ed","resourceVersion":"376","creationTimestamp":"2023-10-26T01:13:04Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f5eb1b01-7f36-41da-8e2b-7cffcba996d0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f5eb1b01-7f36-41da-8e2b-7cffcba996d0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5509 chars]
	I1026 01:14:36.806336  104058 request.go:629] Waited for 197.291715ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:14:36.806407  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:14:36.806415  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:36.806428  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:36.806443  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:36.808716  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:36.808742  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:36.808751  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:36.808761  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:36 GMT
	I1026 01:14:36.808770  104058 round_trippers.go:580]     Audit-Id: fba3c06d-f40c-4334-922b-761950e6afed
	I1026 01:14:36.808778  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:36.808785  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:36.808790  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:36.808893  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"422","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1026 01:14:36.809207  104058 pod_ready.go:92] pod "kube-proxy-hkfhh" in "kube-system" namespace has status "Ready":"True"
	I1026 01:14:36.809223  104058 pod_ready.go:81] duration metric: took 375.464208ms waiting for pod "kube-proxy-hkfhh" in "kube-system" namespace to be "Ready" ...
	I1026 01:14:36.809232  104058 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-q5x8q" in "kube-system" namespace to be "Ready" ...
	I1026 01:14:37.005660  104058 request.go:629] Waited for 196.354043ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q5x8q
	I1026 01:14:37.005747  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q5x8q
	I1026 01:14:37.005752  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:37.005760  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:37.005769  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:37.008060  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:37.008082  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:37.008092  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:37.008100  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:37 GMT
	I1026 01:14:37.008108  104058 round_trippers.go:580]     Audit-Id: 1364f9d4-3f56-4c01-94c4-096de925fef2
	I1026 01:14:37.008115  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:37.008123  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:37.008131  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:37.008248  104058 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-q5x8q","generateName":"kube-proxy-","namespace":"kube-system","uid":"93c7892a-7738-4d02-b5f3-3de3162b6af6","resourceVersion":"520","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f5eb1b01-7f36-41da-8e2b-7cffcba996d0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f5eb1b01-7f36-41da-8e2b-7cffcba996d0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5517 chars]
	I1026 01:14:37.206047  104058 request.go:629] Waited for 197.367193ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:37.206099  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768-m02
	I1026 01:14:37.206104  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:37.206111  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:37.206123  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:37.208555  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:37.208581  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:37.208593  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:37.208601  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:37 GMT
	I1026 01:14:37.208607  104058 round_trippers.go:580]     Audit-Id: 9a5c16be-e497-4768-a14c-f2134bc109e3
	I1026 01:14:37.208616  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:37.208624  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:37.208630  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:37.208755  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768-m02","uid":"bc080356-ce82-4550-944d-c16efa759807","resourceVersion":"550","creationTimestamp":"2023-10-26T01:13:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5296 chars]
	I1026 01:14:37.209084  104058 pod_ready.go:92] pod "kube-proxy-q5x8q" in "kube-system" namespace has status "Ready":"True"
	I1026 01:14:37.209102  104058 pod_ready.go:81] duration metric: took 399.856526ms waiting for pod "kube-proxy-q5x8q" in "kube-system" namespace to be "Ready" ...
	I1026 01:14:37.209115  104058 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-204768" in "kube-system" namespace to be "Ready" ...
	I1026 01:14:37.405382  104058 request.go:629] Waited for 196.183815ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-204768
	I1026 01:14:37.405451  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-204768
	I1026 01:14:37.405463  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:37.405475  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:37.405492  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:37.407812  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:37.407837  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:37.407847  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:37.407856  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:37.407865  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:37.407873  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:37.407882  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:37 GMT
	I1026 01:14:37.407891  104058 round_trippers.go:580]     Audit-Id: ffaafa1c-0591-49fd-875d-c68a8490aaae
	I1026 01:14:37.408051  104058 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-204768","namespace":"kube-system","uid":"9760c99d-332a-47cd-87ba-bb616722ecef","resourceVersion":"410","creationTimestamp":"2023-10-26T01:12:52Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"979980edfd50477450614c13b844007d","kubernetes.io/config.mirror":"979980edfd50477450614c13b844007d","kubernetes.io/config.seen":"2023-10-26T01:12:52.092064230Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:12:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1026 01:14:37.605823  104058 request.go:629] Waited for 197.34716ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:14:37.605915  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-204768
	I1026 01:14:37.605927  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:37.605936  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:37.605948  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:37.608213  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:37.608233  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:37.608242  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:37 GMT
	I1026 01:14:37.608249  104058 round_trippers.go:580]     Audit-Id: f27057c9-2bad-4781-a2ad-78cee3cdad19
	I1026 01:14:37.608259  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:37.608267  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:37.608280  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:37.608289  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:37.608391  104058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"422","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-26T01:12:49Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1026 01:14:37.608784  104058 pod_ready.go:92] pod "kube-scheduler-multinode-204768" in "kube-system" namespace has status "Ready":"True"
	I1026 01:14:37.608808  104058 pod_ready.go:81] duration metric: took 399.686291ms waiting for pod "kube-scheduler-multinode-204768" in "kube-system" namespace to be "Ready" ...
	I1026 01:14:37.608820  104058 pod_ready.go:38] duration metric: took 1.20064216s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1026 01:14:37.608841  104058 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 01:14:37.608884  104058 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 01:14:37.619868  104058 system_svc.go:56] duration metric: took 11.020733ms WaitForService to wait for kubelet.
	I1026 01:14:37.619890  104058 kubeadm.go:581] duration metric: took 45.737629932s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1026 01:14:37.619908  104058 node_conditions.go:102] verifying NodePressure condition ...
	I1026 01:14:37.806349  104058 request.go:629] Waited for 186.3605ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1026 01:14:37.806399  104058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1026 01:14:37.806404  104058 round_trippers.go:469] Request Headers:
	I1026 01:14:37.806412  104058 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:14:37.806418  104058 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:14:37.808870  104058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:14:37.808890  104058 round_trippers.go:577] Response Headers:
	I1026 01:14:37.808897  104058 round_trippers.go:580]     Audit-Id: c338c673-1297-45a8-b548-40fd44a844f7
	I1026 01:14:37.808902  104058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1026 01:14:37.808907  104058 round_trippers.go:580]     Content-Type: application/json
	I1026 01:14:37.808912  104058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 02685de0-9aa9-4eea-a4dd-803610b90a63
	I1026 01:14:37.808917  104058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1f3ca53e-92fa-457a-b10d-e194121979e9
	I1026 01:14:37.808922  104058 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:14:37 GMT
	I1026 01:14:37.809166  104058 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"551"},"items":[{"metadata":{"name":"multinode-204768","uid":"dac4ff1a-3f3f-45e1-a7af-cfe2059d567e","resourceVersion":"422","creationTimestamp":"2023-10-26T01:12:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-204768","kubernetes.io/os":"linux","minikube.k8s.io/commit":"af1d352f1030f8f3ea7f97e311e7fe82ef319942","minikube.k8s.io/name":"multinode-204768","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_26T01_12_52_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 12288 chars]
	I1026 01:14:37.809651  104058 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1026 01:14:37.809666  104058 node_conditions.go:123] node cpu capacity is 8
	I1026 01:14:37.809731  104058 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1026 01:14:37.809737  104058 node_conditions.go:123] node cpu capacity is 8
	I1026 01:14:37.809742  104058 node_conditions.go:105] duration metric: took 189.829427ms to run NodePressure ...
	I1026 01:14:37.809754  104058 start.go:228] waiting for startup goroutines ...
	I1026 01:14:37.809787  104058 start.go:242] writing updated cluster config ...
	I1026 01:14:37.810058  104058 ssh_runner.go:195] Run: rm -f paused
	I1026 01:14:37.857109  104058 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1026 01:14:37.860671  104058 out.go:177] * Done! kubectl is now configured to use "multinode-204768" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Oct 26 01:13:36 multinode-204768 crio[963]: time="2023-10-26 01:13:36.200802286Z" level=info msg="Starting container: 459a7439d8b1a2bc8f7fd81f77f24bb9ff90be12b58e1bd93b4f2fa331b52e1f" id=bec386c5-c825-4eb8-ae17-7fb547f3e114 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 01:13:36 multinode-204768 crio[963]: time="2023-10-26 01:13:36.201359300Z" level=info msg="Created container 1515f0042891a206ade8aadd80803fa68dbc98308fe6115a7153ae531e8ddf44: kube-system/storage-provisioner/storage-provisioner" id=02d5d2f5-6159-46f3-9d40-6f99b48e2eda name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 01:13:36 multinode-204768 crio[963]: time="2023-10-26 01:13:36.201970935Z" level=info msg="Starting container: 1515f0042891a206ade8aadd80803fa68dbc98308fe6115a7153ae531e8ddf44" id=46139a58-4ff1-4dc6-bb19-b9cd3a5ee22b name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 01:13:36 multinode-204768 crio[963]: time="2023-10-26 01:13:36.212642051Z" level=info msg="Started container" PID=2351 containerID=459a7439d8b1a2bc8f7fd81f77f24bb9ff90be12b58e1bd93b4f2fa331b52e1f description=kube-system/coredns-5dd5756b68-dccqq/coredns id=bec386c5-c825-4eb8-ae17-7fb547f3e114 name=/runtime.v1.RuntimeService/StartContainer sandboxID=dc5fd57d08998bcc15a240bf26ded296a95b2347749e801df0fec0690feb9538
	Oct 26 01:13:36 multinode-204768 crio[963]: time="2023-10-26 01:13:36.214937344Z" level=info msg="Started container" PID=2352 containerID=1515f0042891a206ade8aadd80803fa68dbc98308fe6115a7153ae531e8ddf44 description=kube-system/storage-provisioner/storage-provisioner id=46139a58-4ff1-4dc6-bb19-b9cd3a5ee22b name=/runtime.v1.RuntimeService/StartContainer sandboxID=05ff347b7c165af019f2694e061e0e4df6ca2b36ac883891ab7100f7121f3883
	Oct 26 01:14:38 multinode-204768 crio[963]: time="2023-10-26 01:14:38.870841512Z" level=info msg="Running pod sandbox: default/busybox-5bc68d56bd-lvqzv/POD" id=42270880-48b5-43cf-8f34-3c1e701dca19 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 01:14:38 multinode-204768 crio[963]: time="2023-10-26 01:14:38.870922351Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 26 01:14:38 multinode-204768 crio[963]: time="2023-10-26 01:14:38.886741350Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-lvqzv Namespace:default ID:f7d48e8b61ff98a51be6a68947944f66b4ebf24411c7aaf6241feef8c4a55a28 UID:706f3687-8629-4e40-b422-4c4d2f78daf9 NetNS:/var/run/netns/4bab7654-00a8-40b8-a00e-571a985b94c2 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 26 01:14:38 multinode-204768 crio[963]: time="2023-10-26 01:14:38.886775007Z" level=info msg="Adding pod default_busybox-5bc68d56bd-lvqzv to CNI network \"kindnet\" (type=ptp)"
	Oct 26 01:14:38 multinode-204768 crio[963]: time="2023-10-26 01:14:38.895269616Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-lvqzv Namespace:default ID:f7d48e8b61ff98a51be6a68947944f66b4ebf24411c7aaf6241feef8c4a55a28 UID:706f3687-8629-4e40-b422-4c4d2f78daf9 NetNS:/var/run/netns/4bab7654-00a8-40b8-a00e-571a985b94c2 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 26 01:14:38 multinode-204768 crio[963]: time="2023-10-26 01:14:38.895373979Z" level=info msg="Checking pod default_busybox-5bc68d56bd-lvqzv for CNI network kindnet (type=ptp)"
	Oct 26 01:14:38 multinode-204768 crio[963]: time="2023-10-26 01:14:38.912781461Z" level=info msg="Ran pod sandbox f7d48e8b61ff98a51be6a68947944f66b4ebf24411c7aaf6241feef8c4a55a28 with infra container: default/busybox-5bc68d56bd-lvqzv/POD" id=42270880-48b5-43cf-8f34-3c1e701dca19 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 01:14:38 multinode-204768 crio[963]: time="2023-10-26 01:14:38.913939713Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=52721870-2597-40d7-833a-3d1e76c17232 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 01:14:38 multinode-204768 crio[963]: time="2023-10-26 01:14:38.914201059Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=52721870-2597-40d7-833a-3d1e76c17232 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 01:14:38 multinode-204768 crio[963]: time="2023-10-26 01:14:38.914963169Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=01635092-dae4-4a0f-ba43-9dd5dab5a2dc name=/runtime.v1.ImageService/PullImage
	Oct 26 01:14:38 multinode-204768 crio[963]: time="2023-10-26 01:14:38.918981519Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Oct 26 01:14:39 multinode-204768 crio[963]: time="2023-10-26 01:14:39.082526115Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Oct 26 01:14:39 multinode-204768 crio[963]: time="2023-10-26 01:14:39.527343009Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335" id=01635092-dae4-4a0f-ba43-9dd5dab5a2dc name=/runtime.v1.ImageService/PullImage
	Oct 26 01:14:39 multinode-204768 crio[963]: time="2023-10-26 01:14:39.528914064Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=68911e43-55b0-4313-8cd2-601e0143bdfb name=/runtime.v1.ImageService/ImageStatus
	Oct 26 01:14:39 multinode-204768 crio[963]: time="2023-10-26 01:14:39.529511011Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=68911e43-55b0-4313-8cd2-601e0143bdfb name=/runtime.v1.ImageService/ImageStatus
	Oct 26 01:14:39 multinode-204768 crio[963]: time="2023-10-26 01:14:39.530447900Z" level=info msg="Creating container: default/busybox-5bc68d56bd-lvqzv/busybox" id=9b12d95d-7e1e-4c40-bbd6-b90a0e83ee73 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 01:14:39 multinode-204768 crio[963]: time="2023-10-26 01:14:39.530557998Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 26 01:14:39 multinode-204768 crio[963]: time="2023-10-26 01:14:39.610305948Z" level=info msg="Created container 4f02285b8c35dc8a5c92797ebcf72c7d63dca670cdd43a41b14c1bae451d4564: default/busybox-5bc68d56bd-lvqzv/busybox" id=9b12d95d-7e1e-4c40-bbd6-b90a0e83ee73 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 01:14:39 multinode-204768 crio[963]: time="2023-10-26 01:14:39.611128529Z" level=info msg="Starting container: 4f02285b8c35dc8a5c92797ebcf72c7d63dca670cdd43a41b14c1bae451d4564" id=216bdc45-837c-41d5-b6a9-30b52d2e0775 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 01:14:39 multinode-204768 crio[963]: time="2023-10-26 01:14:39.618929893Z" level=info msg="Started container" PID=2533 containerID=4f02285b8c35dc8a5c92797ebcf72c7d63dca670cdd43a41b14c1bae451d4564 description=default/busybox-5bc68d56bd-lvqzv/busybox id=216bdc45-837c-41d5-b6a9-30b52d2e0775 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f7d48e8b61ff98a51be6a68947944f66b4ebf24411c7aaf6241feef8c4a55a28
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	4f02285b8c35d       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 seconds ago        Running             busybox                   0                   f7d48e8b61ff9       busybox-5bc68d56bd-lvqzv
	459a7439d8b1a       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      About a minute ago   Running             coredns                   0                   dc5fd57d08998       coredns-5dd5756b68-dccqq
	1515f0042891a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       0                   05ff347b7c165       storage-provisioner
	f0a530f520012       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      About a minute ago   Running             kindnet-cni               0                   ee75d28dde479       kindnet-9jtfh
	bfcbc1618f95b       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf                                      About a minute ago   Running             kube-proxy                0                   49418fd144de2       kube-proxy-hkfhh
	8382fc1517326       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3                                      About a minute ago   Running             kube-controller-manager   0                   d9d3c71a74882       kube-controller-manager-multinode-204768
	59e19b5efeb4a       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076                                      About a minute ago   Running             kube-apiserver            0                   8c4066b8eb341       kube-apiserver-multinode-204768
	ac3178839477c       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      About a minute ago   Running             etcd                      0                   d3ed1ead13346       etcd-multinode-204768
	289b9202442ef       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4                                      About a minute ago   Running             kube-scheduler            0                   a088a1b4273b2       kube-scheduler-multinode-204768
	
	* 
	* ==> coredns [459a7439d8b1a2bc8f7fd81f77f24bb9ff90be12b58e1bd93b4f2fa331b52e1f] <==
	* [INFO] 10.244.1.2:47940 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000115696s
	[INFO] 10.244.0.3:38332 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105067s
	[INFO] 10.244.0.3:33549 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001710036s
	[INFO] 10.244.0.3:43861 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000069884s
	[INFO] 10.244.0.3:39477 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000054918s
	[INFO] 10.244.0.3:59296 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001061781s
	[INFO] 10.244.0.3:56583 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000036836s
	[INFO] 10.244.0.3:37286 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000049327s
	[INFO] 10.244.0.3:37086 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00003483s
	[INFO] 10.244.1.2:53698 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00011999s
	[INFO] 10.244.1.2:48597 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00007785s
	[INFO] 10.244.1.2:44845 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000098937s
	[INFO] 10.244.1.2:34276 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000066277s
	[INFO] 10.244.0.3:56655 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113918s
	[INFO] 10.244.0.3:57949 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000115015s
	[INFO] 10.244.0.3:33702 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000101171s
	[INFO] 10.244.0.3:54417 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000058861s
	[INFO] 10.244.1.2:43875 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144724s
	[INFO] 10.244.1.2:51591 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000126059s
	[INFO] 10.244.1.2:56693 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000125117s
	[INFO] 10.244.1.2:49735 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000115154s
	[INFO] 10.244.0.3:56451 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109133s
	[INFO] 10.244.0.3:55511 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000066844s
	[INFO] 10.244.0.3:50039 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000061351s
	[INFO] 10.244.0.3:39020 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000051551s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-204768
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-204768
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=af1d352f1030f8f3ea7f97e311e7fe82ef319942
	                    minikube.k8s.io/name=multinode-204768
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_26T01_12_52_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 26 Oct 2023 01:12:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-204768
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 26 Oct 2023 01:14:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 26 Oct 2023 01:13:35 +0000   Thu, 26 Oct 2023 01:12:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 26 Oct 2023 01:13:35 +0000   Thu, 26 Oct 2023 01:12:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 26 Oct 2023 01:13:35 +0000   Thu, 26 Oct 2023 01:12:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 26 Oct 2023 01:13:35 +0000   Thu, 26 Oct 2023 01:13:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-204768
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	System Info:
	  Machine ID:                 a1bb4758e8e940109a7d25f8b7aeec09
	  System UUID:                ebdf1096-e4cd-4c95-9235-9cf6da75076a
	  Boot ID:                    37a42525-bdda-4c41-ac15-6bc286a851a0
	  Kernel Version:             5.15.0-1045-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-lvqzv                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  kube-system                 coredns-5dd5756b68-dccqq                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     99s
	  kube-system                 etcd-multinode-204768                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         114s
	  kube-system                 kindnet-9jtfh                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      100s
	  kube-system                 kube-apiserver-multinode-204768             250m (3%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-controller-manager-multinode-204768    200m (2%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-proxy-hkfhh                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 kube-scheduler-multinode-204768             100m (1%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 98s                  kube-proxy       
	  Normal  Starting                 118s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  118s (x8 over 118s)  kubelet          Node multinode-204768 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    118s (x8 over 118s)  kubelet          Node multinode-204768 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     118s (x8 over 118s)  kubelet          Node multinode-204768 status is now: NodeHasSufficientPID
	  Normal  Starting                 112s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  112s                 kubelet          Node multinode-204768 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    112s                 kubelet          Node multinode-204768 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     112s                 kubelet          Node multinode-204768 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           100s                 node-controller  Node multinode-204768 event: Registered Node multinode-204768 in Controller
	  Normal  NodeReady                69s                  kubelet          Node multinode-204768 status is now: NodeReady
	
	
	Name:               multinode-204768-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-204768-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 26 Oct 2023 01:13:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-204768-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 26 Oct 2023 01:14:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 26 Oct 2023 01:14:36 +0000   Thu, 26 Oct 2023 01:13:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 26 Oct 2023 01:14:36 +0000   Thu, 26 Oct 2023 01:13:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 26 Oct 2023 01:14:36 +0000   Thu, 26 Oct 2023 01:13:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 26 Oct 2023 01:14:36 +0000   Thu, 26 Oct 2023 01:14:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-204768-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	System Info:
	  Machine ID:                 10998a1c77744318b39854a875038943
	  System UUID:                66ea272b-1cdf-4a34-85ce-7afc7e3581d1
	  Boot ID:                    37a42525-bdda-4c41-ac15-6bc286a851a0
	  Kernel Version:             5.15.0-1045-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-j4c2s    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  kube-system                 kindnet-jt5lf               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      53s
	  kube-system                 kube-proxy-q5x8q            0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 36s                kube-proxy       
	  Normal  NodeHasSufficientMemory  53s (x5 over 55s)  kubelet          Node multinode-204768-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    53s (x5 over 55s)  kubelet          Node multinode-204768-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     53s (x5 over 55s)  kubelet          Node multinode-204768-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           50s                node-controller  Node multinode-204768-m02 event: Registered Node multinode-204768-m02 in Controller
	  Normal  NodeReady                8s                 kubelet          Node multinode-204768-m02 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.004952] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.006580] FS-Cache: N-cookie d=00000000f988483e{9p.inode} n=00000000d3a39bfe
	[  +0.008740] FS-Cache: N-key=[8] '8ca00f0200000000'
	[  +0.289092] FS-Cache: Duplicate cookie detected
	[  +0.004711] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006774] FS-Cache: O-cookie d=00000000f988483e{9p.inode} n=00000000bc03619b
	[  +0.007366] FS-Cache: O-key=[8] '95a00f0200000000'
	[  +0.004973] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.007947] FS-Cache: N-cookie d=00000000f988483e{9p.inode} n=0000000035663f65
	[  +0.008716] FS-Cache: N-key=[8] '95a00f0200000000'
	[  +5.566645] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct26 01:04] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 86 6b e3 09 2e 60 e2 63 13 68 95 0e 08 00
	[  +1.027979] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 86 6b e3 09 2e 60 e2 63 13 68 95 0e 08 00
	[  +2.015817] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 86 6b e3 09 2e 60 e2 63 13 68 95 0e 08 00
	[Oct26 01:05] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 86 6b e3 09 2e 60 e2 63 13 68 95 0e 08 00
	[  +8.187230] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 86 6b e3 09 2e 60 e2 63 13 68 95 0e 08 00
	[ +16.126419] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 86 6b e3 09 2e 60 e2 63 13 68 95 0e 08 00
	[ +32.764868] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 86 6b e3 09 2e 60 e2 63 13 68 95 0e 08 00
	
	* 
	* ==> etcd [ac3178839477cd1e20c5a7a1fcf34402465a439224dbcbbacb1fafc34687e18f] <==
	* {"level":"info","ts":"2023-10-26T01:12:47.21703Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-26T01:12:47.217365Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2023-10-26T01:12:47.217429Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-10-26T01:12:47.217473Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-10-26T01:12:47.217473Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-10-26T01:12:47.217566Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-26T01:12:47.50502Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-10-26T01:12:47.505067Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-10-26T01:12:47.505099Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-10-26T01:12:47.505113Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-10-26T01:12:47.505119Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-10-26T01:12:47.505128Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-10-26T01:12:47.505135Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-10-26T01:12:47.506059Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-26T01:12:47.506668Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-26T01:12:47.506663Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-204768 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-26T01:12:47.506691Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-26T01:12:47.506864Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-26T01:12:47.506943Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-26T01:12:47.507005Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-26T01:12:47.50717Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-26T01:12:47.507228Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-26T01:12:47.507924Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-26T01:12:47.508043Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-10-26T01:13:44.557494Z","caller":"traceutil/trace.go:171","msg":"trace[1259895381] transaction","detail":"{read_only:false; response_revision:453; number_of_response:1; }","duration":"240.783009ms","start":"2023-10-26T01:13:44.316697Z","end":"2023-10-26T01:13:44.55748Z","steps":["trace[1259895381] 'process raft request'  (duration: 240.677645ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  01:14:44 up 57 min,  0 users,  load average: 0.69, 0.75, 0.62
	Linux multinode-204768 5.15.0-1045-gcp #53~20.04.2-Ubuntu SMP Wed Oct 18 12:59:20 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [f0a530f520012b70e3bd9712b258ddccf3a64f32fa290f4c714d8158c1200c1b] <==
	* I1026 01:13:35.410050       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1026 01:13:35.410087       1 main.go:227] handling current node
	I1026 01:13:45.420848       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1026 01:13:45.420945       1 main.go:227] handling current node
	I1026 01:13:55.433837       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1026 01:13:55.433865       1 main.go:227] handling current node
	I1026 01:13:55.433875       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1026 01:13:55.433880       1 main.go:250] Node multinode-204768-m02 has CIDR [10.244.1.0/24] 
	I1026 01:13:55.434051       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	I1026 01:14:05.438431       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1026 01:14:05.438455       1 main.go:227] handling current node
	I1026 01:14:05.438465       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1026 01:14:05.438469       1 main.go:250] Node multinode-204768-m02 has CIDR [10.244.1.0/24] 
	I1026 01:14:15.451424       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1026 01:14:15.451451       1 main.go:227] handling current node
	I1026 01:14:15.451460       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1026 01:14:15.451465       1 main.go:250] Node multinode-204768-m02 has CIDR [10.244.1.0/24] 
	I1026 01:14:25.455831       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1026 01:14:25.455861       1 main.go:227] handling current node
	I1026 01:14:25.455871       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1026 01:14:25.455877       1 main.go:250] Node multinode-204768-m02 has CIDR [10.244.1.0/24] 
	I1026 01:14:35.466178       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1026 01:14:35.466230       1 main.go:227] handling current node
	I1026 01:14:35.466244       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1026 01:14:35.466252       1 main.go:250] Node multinode-204768-m02 has CIDR [10.244.1.0/24] 
	
	* 
	* ==> kube-apiserver [59e19b5efeb4aa4f18c74c7552390f045f00237b080c397aa96fcb60d503b7e5] <==
	* I1026 01:12:49.389974       1 shared_informer.go:318] Caches are synced for configmaps
	I1026 01:12:49.390074       1 aggregator.go:166] initial CRD sync complete...
	I1026 01:12:49.390633       1 autoregister_controller.go:141] Starting autoregister controller
	I1026 01:12:49.390666       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1026 01:12:49.390698       1 cache.go:39] Caches are synced for autoregister controller
	I1026 01:12:49.391465       1 controller.go:624] quota admission added evaluator for: namespaces
	I1026 01:12:49.391815       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1026 01:12:49.391976       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	E1026 01:12:49.393792       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1026 01:12:49.597761       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 01:12:50.159400       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1026 01:12:50.162875       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1026 01:12:50.162892       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 01:12:50.579369       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 01:12:50.611440       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 01:12:50.705207       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1026 01:12:50.710425       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I1026 01:12:50.711454       1 controller.go:624] quota admission added evaluator for: endpoints
	I1026 01:12:50.715464       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 01:12:51.322743       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1026 01:12:51.997720       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1026 01:12:52.008077       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1026 01:12:52.016706       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1026 01:13:04.393201       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1026 01:13:05.109527       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-controller-manager [8382fc151732663c9226f3adc2e02b48abc632d67ad0450ea5d414b46f8795e4] <==
	* I1026 01:13:35.794872       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="85.993µs"
	I1026 01:13:36.319683       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="128.354µs"
	I1026 01:13:37.312832       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.195636ms"
	I1026 01:13:37.312931       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="53.295µs"
	I1026 01:13:39.439605       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1026 01:13:51.092364       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-204768-m02\" does not exist"
	I1026 01:13:51.096252       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-204768-m02" podCIDRs=["10.244.1.0/24"]
	I1026 01:13:51.101646       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-jt5lf"
	I1026 01:13:51.101697       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-q5x8q"
	I1026 01:13:54.442599       1 event.go:307] "Event occurred" object="multinode-204768-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-204768-m02 event: Registered Node multinode-204768-m02 in Controller"
	I1026 01:13:54.442692       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-204768-m02"
	I1026 01:14:36.140890       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-204768-m02"
	I1026 01:14:38.546672       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I1026 01:14:38.554995       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-j4c2s"
	I1026 01:14:38.561125       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-lvqzv"
	I1026 01:14:38.569789       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="23.239387ms"
	I1026 01:14:38.573571       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="3.646211ms"
	I1026 01:14:38.573696       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="49.083µs"
	I1026 01:14:38.573769       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="39.191µs"
	I1026 01:14:38.579531       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="71.993µs"
	I1026 01:14:39.461709       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-j4c2s" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-j4c2s"
	I1026 01:14:39.688666       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="4.897595ms"
	I1026 01:14:39.688734       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="35.732µs"
	I1026 01:14:40.420075       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="4.861204ms"
	I1026 01:14:40.420171       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="51.087µs"
	
	* 
	* ==> kube-proxy [bfcbc1618f95bfaadcd7ef4e9626a605979c6bf317d9e52e449814ba64593262] <==
	* I1026 01:13:05.194600       1 server_others.go:69] "Using iptables proxy"
	I1026 01:13:05.212355       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I1026 01:13:05.495579       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 01:13:05.500561       1 server_others.go:152] "Using iptables Proxier"
	I1026 01:13:05.500717       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1026 01:13:05.500785       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1026 01:13:05.500850       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1026 01:13:05.501164       1 server.go:846] "Version info" version="v1.28.3"
	I1026 01:13:05.501190       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 01:13:05.502096       1 config.go:315] "Starting node config controller"
	I1026 01:13:05.502553       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1026 01:13:05.502231       1 config.go:188] "Starting service config controller"
	I1026 01:13:05.502682       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1026 01:13:05.502259       1 config.go:97] "Starting endpoint slice config controller"
	I1026 01:13:05.502736       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1026 01:13:05.603030       1 shared_informer.go:318] Caches are synced for node config
	I1026 01:13:05.605118       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1026 01:13:05.605238       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [289b9202442efbb9bec9d802611c95ebcbb7770abbfcd2c3ea63ae30dfb7701d] <==
	* W1026 01:12:49.397181       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1026 01:12:49.397885       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1026 01:12:49.397275       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1026 01:12:49.397902       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1026 01:12:49.397328       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1026 01:12:49.397918       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1026 01:12:49.398655       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1026 01:12:49.398723       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1026 01:12:50.280591       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1026 01:12:50.280634       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1026 01:12:50.284952       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1026 01:12:50.284983       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1026 01:12:50.318061       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1026 01:12:50.318094       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1026 01:12:50.337504       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1026 01:12:50.337546       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1026 01:12:50.408088       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1026 01:12:50.408125       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1026 01:12:50.430650       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1026 01:12:50.430691       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1026 01:12:50.435090       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1026 01:12:50.435129       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1026 01:12:50.565838       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1026 01:12:50.565877       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1026 01:12:53.812401       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Oct 26 01:13:04 multinode-204768 kubelet[1602]: I1026 01:13:04.490156    1602 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/41219a25-2f31-49f2-a776-52d56ecfb4cf-xtables-lock\") pod \"kindnet-9jtfh\" (UID: \"41219a25-2f31-49f2-a776-52d56ecfb4cf\") " pod="kube-system/kindnet-9jtfh"
	Oct 26 01:13:04 multinode-204768 kubelet[1602]: I1026 01:13:04.490266    1602 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1fb5ef2f-82a6-48b5-bb1b-9f7461ed90ed-kube-proxy\") pod \"kube-proxy-hkfhh\" (UID: \"1fb5ef2f-82a6-48b5-bb1b-9f7461ed90ed\") " pod="kube-system/kube-proxy-hkfhh"
	Oct 26 01:13:04 multinode-204768 kubelet[1602]: I1026 01:13:04.490344    1602 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1fb5ef2f-82a6-48b5-bb1b-9f7461ed90ed-lib-modules\") pod \"kube-proxy-hkfhh\" (UID: \"1fb5ef2f-82a6-48b5-bb1b-9f7461ed90ed\") " pod="kube-system/kube-proxy-hkfhh"
	Oct 26 01:13:04 multinode-204768 kubelet[1602]: I1026 01:13:04.490462    1602 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/41219a25-2f31-49f2-a776-52d56ecfb4cf-cni-cfg\") pod \"kindnet-9jtfh\" (UID: \"41219a25-2f31-49f2-a776-52d56ecfb4cf\") " pod="kube-system/kindnet-9jtfh"
	Oct 26 01:13:04 multinode-204768 kubelet[1602]: I1026 01:13:04.490541    1602 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/41219a25-2f31-49f2-a776-52d56ecfb4cf-lib-modules\") pod \"kindnet-9jtfh\" (UID: \"41219a25-2f31-49f2-a776-52d56ecfb4cf\") " pod="kube-system/kindnet-9jtfh"
	Oct 26 01:13:04 multinode-204768 kubelet[1602]: I1026 01:13:04.490582    1602 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mspr7\" (UniqueName: \"kubernetes.io/projected/41219a25-2f31-49f2-a776-52d56ecfb4cf-kube-api-access-mspr7\") pod \"kindnet-9jtfh\" (UID: \"41219a25-2f31-49f2-a776-52d56ecfb4cf\") " pod="kube-system/kindnet-9jtfh"
	Oct 26 01:13:04 multinode-204768 kubelet[1602]: W1026 01:13:04.746647    1602 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/704cc6eb735cf196584d203a621fd6870e0b0e3f9808545cb7993a1ec9708344/crio-49418fd144de282387d3f810fa42caacdbf387922c77acdd75e7e174ae3d9743 WatchSource:0}: Error finding container 49418fd144de282387d3f810fa42caacdbf387922c77acdd75e7e174ae3d9743: Status 404 returned error can't find the container with id 49418fd144de282387d3f810fa42caacdbf387922c77acdd75e7e174ae3d9743
	Oct 26 01:13:04 multinode-204768 kubelet[1602]: W1026 01:13:04.746951    1602 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/704cc6eb735cf196584d203a621fd6870e0b0e3f9808545cb7993a1ec9708344/crio-ee75d28dde4793cba854488f964aa78c796758c0eb92d4a91e8b6a0043ebf12a WatchSource:0}: Error finding container ee75d28dde4793cba854488f964aa78c796758c0eb92d4a91e8b6a0043ebf12a: Status 404 returned error can't find the container with id ee75d28dde4793cba854488f964aa78c796758c0eb92d4a91e8b6a0043ebf12a
	Oct 26 01:13:05 multinode-204768 kubelet[1602]: I1026 01:13:05.294205    1602 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-9jtfh" podStartSLOduration=1.294150728 podCreationTimestamp="2023-10-26 01:13:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-26 01:13:05.210085939 +0000 UTC m=+13.237085780" watchObservedRunningTime="2023-10-26 01:13:05.294150728 +0000 UTC m=+13.321150647"
	Oct 26 01:13:06 multinode-204768 kubelet[1602]: I1026 01:13:06.497021    1602 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-hkfhh" podStartSLOduration=2.496963772 podCreationTimestamp="2023-10-26 01:13:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-26 01:13:05.294330157 +0000 UTC m=+13.321329995" watchObservedRunningTime="2023-10-26 01:13:06.496963772 +0000 UTC m=+14.523963616"
	Oct 26 01:13:35 multinode-204768 kubelet[1602]: I1026 01:13:35.755384    1602 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 26 01:13:35 multinode-204768 kubelet[1602]: I1026 01:13:35.778183    1602 topology_manager.go:215] "Topology Admit Handler" podUID="7d126e64-5bdb-4415-a095-5d9411bdfb3d" podNamespace="kube-system" podName="storage-provisioner"
	Oct 26 01:13:35 multinode-204768 kubelet[1602]: I1026 01:13:35.779300    1602 topology_manager.go:215] "Topology Admit Handler" podUID="40c339fe-ec4b-429f-afa8-f305c33e4344" podNamespace="kube-system" podName="coredns-5dd5756b68-dccqq"
	Oct 26 01:13:35 multinode-204768 kubelet[1602]: I1026 01:13:35.917149    1602 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7d126e64-5bdb-4415-a095-5d9411bdfb3d-tmp\") pod \"storage-provisioner\" (UID: \"7d126e64-5bdb-4415-a095-5d9411bdfb3d\") " pod="kube-system/storage-provisioner"
	Oct 26 01:13:35 multinode-204768 kubelet[1602]: I1026 01:13:35.917212    1602 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcbzk\" (UniqueName: \"kubernetes.io/projected/40c339fe-ec4b-429f-afa8-f305c33e4344-kube-api-access-gcbzk\") pod \"coredns-5dd5756b68-dccqq\" (UID: \"40c339fe-ec4b-429f-afa8-f305c33e4344\") " pod="kube-system/coredns-5dd5756b68-dccqq"
	Oct 26 01:13:35 multinode-204768 kubelet[1602]: I1026 01:13:35.917244    1602 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40c339fe-ec4b-429f-afa8-f305c33e4344-config-volume\") pod \"coredns-5dd5756b68-dccqq\" (UID: \"40c339fe-ec4b-429f-afa8-f305c33e4344\") " pod="kube-system/coredns-5dd5756b68-dccqq"
	Oct 26 01:13:35 multinode-204768 kubelet[1602]: I1026 01:13:35.917314    1602 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d9wm\" (UniqueName: \"kubernetes.io/projected/7d126e64-5bdb-4415-a095-5d9411bdfb3d-kube-api-access-5d9wm\") pod \"storage-provisioner\" (UID: \"7d126e64-5bdb-4415-a095-5d9411bdfb3d\") " pod="kube-system/storage-provisioner"
	Oct 26 01:13:36 multinode-204768 kubelet[1602]: W1026 01:13:36.118636    1602 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/704cc6eb735cf196584d203a621fd6870e0b0e3f9808545cb7993a1ec9708344/crio-05ff347b7c165af019f2694e061e0e4df6ca2b36ac883891ab7100f7121f3883 WatchSource:0}: Error finding container 05ff347b7c165af019f2694e061e0e4df6ca2b36ac883891ab7100f7121f3883: Status 404 returned error can't find the container with id 05ff347b7c165af019f2694e061e0e4df6ca2b36ac883891ab7100f7121f3883
	Oct 26 01:13:36 multinode-204768 kubelet[1602]: W1026 01:13:36.118903    1602 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/704cc6eb735cf196584d203a621fd6870e0b0e3f9808545cb7993a1ec9708344/crio-dc5fd57d08998bcc15a240bf26ded296a95b2347749e801df0fec0690feb9538 WatchSource:0}: Error finding container dc5fd57d08998bcc15a240bf26ded296a95b2347749e801df0fec0690feb9538: Status 404 returned error can't find the container with id dc5fd57d08998bcc15a240bf26ded296a95b2347749e801df0fec0690feb9538
	Oct 26 01:13:36 multinode-204768 kubelet[1602]: I1026 01:13:36.301341    1602 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=30.301292053 podCreationTimestamp="2023-10-26 01:13:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-26 01:13:36.301044453 +0000 UTC m=+44.328044293" watchObservedRunningTime="2023-10-26 01:13:36.301292053 +0000 UTC m=+44.328291913"
	Oct 26 01:13:37 multinode-204768 kubelet[1602]: I1026 01:13:37.305727    1602 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-dccqq" podStartSLOduration=32.305653139 podCreationTimestamp="2023-10-26 01:13:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-26 01:13:36.322263196 +0000 UTC m=+44.349263037" watchObservedRunningTime="2023-10-26 01:13:37.305653139 +0000 UTC m=+45.332652980"
	Oct 26 01:14:38 multinode-204768 kubelet[1602]: I1026 01:14:38.568246    1602 topology_manager.go:215] "Topology Admit Handler" podUID="706f3687-8629-4e40-b422-4c4d2f78daf9" podNamespace="default" podName="busybox-5bc68d56bd-lvqzv"
	Oct 26 01:14:38 multinode-204768 kubelet[1602]: I1026 01:14:38.661488    1602 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-br2kq\" (UniqueName: \"kubernetes.io/projected/706f3687-8629-4e40-b422-4c4d2f78daf9-kube-api-access-br2kq\") pod \"busybox-5bc68d56bd-lvqzv\" (UID: \"706f3687-8629-4e40-b422-4c4d2f78daf9\") " pod="default/busybox-5bc68d56bd-lvqzv"
	Oct 26 01:14:38 multinode-204768 kubelet[1602]: W1026 01:14:38.910662    1602 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/704cc6eb735cf196584d203a621fd6870e0b0e3f9808545cb7993a1ec9708344/crio-f7d48e8b61ff98a51be6a68947944f66b4ebf24411c7aaf6241feef8c4a55a28 WatchSource:0}: Error finding container f7d48e8b61ff98a51be6a68947944f66b4ebf24411c7aaf6241feef8c4a55a28: Status 404 returned error can't find the container with id f7d48e8b61ff98a51be6a68947944f66b4ebf24411c7aaf6241feef8c4a55a28
	Oct 26 01:14:40 multinode-204768 kubelet[1602]: I1026 01:14:40.415162    1602 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-5bc68d56bd-lvqzv" podStartSLOduration=1.801625965 podCreationTimestamp="2023-10-26 01:14:38 +0000 UTC" firstStartedPulling="2023-10-26 01:14:38.914389172 +0000 UTC m=+106.941389005" lastFinishedPulling="2023-10-26 01:14:39.527869983 +0000 UTC m=+107.554869824" observedRunningTime="2023-10-26 01:14:40.41504601 +0000 UTC m=+108.442045868" watchObservedRunningTime="2023-10-26 01:14:40.415106784 +0000 UTC m=+108.442106625"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-204768 -n multinode-204768
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-204768 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.53s)

TestRunningBinaryUpgrade (73.43s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.9.0.4038033973.exe start -p running-upgrade-502124 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.9.0.4038033973.exe start -p running-upgrade-502124 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m6.813200849s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-502124 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-502124 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (2.442887395s)

-- stdout --
	* [running-upgrade-502124] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17491
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17491-8444/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17491-8444/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-502124 in cluster running-upgrade-502124
	* Pulling base image ...
	* Updating the running docker "running-upgrade-502124" container ...
	
	

-- /stdout --
** stderr ** 
	I1026 01:26:31.126362  190094 out.go:296] Setting OutFile to fd 1 ...
	I1026 01:26:31.126498  190094 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1026 01:26:31.126509  190094 out.go:309] Setting ErrFile to fd 2...
	I1026 01:26:31.126518  190094 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1026 01:26:31.126719  190094 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17491-8444/.minikube/bin
	I1026 01:26:31.127323  190094 out.go:303] Setting JSON to false
	I1026 01:26:31.128912  190094 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4137,"bootTime":1698279454,"procs":838,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 01:26:31.128979  190094 start.go:138] virtualization: kvm guest
	I1026 01:26:31.131764  190094 out.go:177] * [running-upgrade-502124] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1026 01:26:31.133504  190094 out.go:177]   - MINIKUBE_LOCATION=17491
	I1026 01:26:31.133524  190094 notify.go:220] Checking for updates...
	I1026 01:26:31.135164  190094 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 01:26:31.136702  190094 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17491-8444/kubeconfig
	I1026 01:26:31.138264  190094 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17491-8444/.minikube
	I1026 01:26:31.139729  190094 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 01:26:31.141171  190094 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 01:26:31.142982  190094 config.go:182] Loaded profile config "running-upgrade-502124": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1026 01:26:31.143003  190094 start_flags.go:709] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883
	I1026 01:26:31.145063  190094 out.go:177] * Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	I1026 01:26:31.146542  190094 driver.go:378] Setting default libvirt URI to qemu:///system
	I1026 01:26:31.168680  190094 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1026 01:26:31.168787  190094 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 01:26:31.224730  190094 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:76 OomKillDisable:true NGoroutines:77 SystemTime:2023-10-26 01:26:31.214905974 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 01:26:31.224834  190094 docker.go:295] overlay module found
	I1026 01:26:31.226712  190094 out.go:177] * Using the docker driver based on existing profile
	I1026 01:26:31.228280  190094 start.go:298] selected driver: docker
	I1026 01:26:31.228297  190094 start.go:902] validating driver "docker" against &{Name:running-upgrade-502124 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-502124 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1026 01:26:31.228371  190094 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 01:26:31.229188  190094 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 01:26:31.286309  190094 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:76 OomKillDisable:true NGoroutines:77 SystemTime:2023-10-26 01:26:31.274980126 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 01:26:31.286702  190094 cni.go:84] Creating CNI manager for ""
	I1026 01:26:31.286725  190094 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I1026 01:26:31.286734  190094 start_flags.go:323] config:
	{Name:running-upgrade-502124 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-502124 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1026 01:26:31.289755  190094 out.go:177] * Starting control plane node running-upgrade-502124 in cluster running-upgrade-502124
	I1026 01:26:31.291327  190094 cache.go:121] Beginning downloading kic base image for docker with crio
	I1026 01:26:31.292817  190094 out.go:177] * Pulling base image ...
	I1026 01:26:31.294268  190094 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I1026 01:26:31.294307  190094 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon
	I1026 01:26:31.312744  190094 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon, skipping pull
	I1026 01:26:31.312775  190094 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 exists in daemon, skipping load
	W1026 01:26:31.331203  190094 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1026 01:26:31.331394  190094 profile.go:148] Saving config to /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/running-upgrade-502124/config.json ...
	I1026 01:26:31.331470  190094 cache.go:107] acquiring lock: {Name:mk94662b444b49d249705031d9b9e55f2bbcc880 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 01:26:31.331545  190094 cache.go:107] acquiring lock: {Name:mkaa9411c633fe4d078141eb33c38b56fff6634c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 01:26:31.331533  190094 cache.go:107] acquiring lock: {Name:mk99fba385363bdf54708c7f54667eab1c169566 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 01:26:31.331581  190094 cache.go:107] acquiring lock: {Name:mk7e717057f75b85f5e993ef0020fb5b2ddb76d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 01:26:31.331603  190094 cache.go:107] acquiring lock: {Name:mkf86b98f8eff2bb70098cd2d40ac9fd54b02e10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 01:26:31.331635  190094 cache.go:115] /home/jenkins/minikube-integration/17491-8444/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 exists
	I1026 01:26:31.331643  190094 cache.go:194] Successfully downloaded all kic artifacts
	I1026 01:26:31.331651  190094 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17491-8444/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2" took 66.785µs
	I1026 01:26:31.331666  190094 cache.go:115] /home/jenkins/minikube-integration/17491-8444/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 exists
	I1026 01:26:31.331668  190094 start.go:365] acquiring machines lock for running-upgrade-502124: {Name:mk45984f3ba6d210c948bd2b2390c9275ef4e5a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 01:26:31.331652  190094 cache.go:107] acquiring lock: {Name:mk5f489add3bb639ac1546966d8ac0dfa6381252 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 01:26:31.331694  190094 cache.go:107] acquiring lock: {Name:mk3f5ed37c35fa8f57ba5ab0e1b296745f13f74f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 01:26:31.331702  190094 cache.go:107] acquiring lock: {Name:mk845a95c9639b5b49cac8db395cd9b020133a42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 01:26:31.331730  190094 start.go:369] acquired machines lock for "running-upgrade-502124" in 48.908µs
	I1026 01:26:31.331740  190094 cache.go:115] /home/jenkins/minikube-integration/17491-8444/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 exists
	I1026 01:26:31.331752  190094 start.go:96] Skipping create...Using existing machine configuration
	I1026 01:26:31.331753  190094 cache.go:115] /home/jenkins/minikube-integration/17491-8444/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 exists
	I1026 01:26:31.331759  190094 fix.go:54] fixHost starting: m01
	I1026 01:26:31.331764  190094 cache.go:115] /home/jenkins/minikube-integration/17491-8444/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1026 01:26:31.331752  190094 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/17491-8444/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0" took 105.778µs
	I1026 01:26:31.331775  190094 cache.go:115] /home/jenkins/minikube-integration/17491-8444/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 exists
	I1026 01:26:31.331790  190094 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/17491-8444/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0" took 259.327µs
	I1026 01:26:31.331800  190094 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/17491-8444/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 succeeded
	I1026 01:26:31.331635  190094 cache.go:115] /home/jenkins/minikube-integration/17491-8444/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 exists
	I1026 01:26:31.331808  190094 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/17491-8444/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0" took 287.319µs
	I1026 01:26:31.331816  190094 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/17491-8444/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 succeeded
	I1026 01:26:31.331764  190094 cache.go:96] cache image "registry.k8s.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/17491-8444/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7" took 65.452µs
	I1026 01:26:31.331824  190094 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/17491-8444/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 succeeded
	I1026 01:26:31.331681  190094 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/17491-8444/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0" took 79.497µs
	I1026 01:26:31.331832  190094 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/17491-8444/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 succeeded
	I1026 01:26:31.331776  190094 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/17491-8444/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 succeeded
	I1026 01:26:31.331668  190094 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17491-8444/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 succeeded
	I1026 01:26:31.331567  190094 cache.go:115] /home/jenkins/minikube-integration/17491-8444/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1026 01:26:31.331846  190094 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17491-8444/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 396.687µs
	I1026 01:26:31.331854  190094 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17491-8444/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1026 01:26:31.331774  190094 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17491-8444/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 82.947µs
	I1026 01:26:31.331861  190094 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17491-8444/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1026 01:26:31.331872  190094 cache.go:87] Successfully saved all images to host disk.
	I1026 01:26:31.332049  190094 cli_runner.go:164] Run: docker container inspect running-upgrade-502124 --format={{.State.Status}}
	I1026 01:26:31.351986  190094 fix.go:102] recreateIfNeeded on running-upgrade-502124: state=Running err=<nil>
	W1026 01:26:31.352016  190094 fix.go:128] unexpected machine state, will restart: <nil>
	I1026 01:26:31.357149  190094 out.go:177] * Updating the running docker "running-upgrade-502124" container ...
	I1026 01:26:31.358764  190094 machine.go:88] provisioning docker machine ...
	I1026 01:26:31.358799  190094 ubuntu.go:169] provisioning hostname "running-upgrade-502124"
	I1026 01:26:31.358867  190094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-502124
	I1026 01:26:31.381587  190094 main.go:141] libmachine: Using SSH client type: native
	I1026 01:26:31.382235  190094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 127.0.0.1 32953 <nil> <nil>}
	I1026 01:26:31.382277  190094 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-502124 && echo "running-upgrade-502124" | sudo tee /etc/hostname
	I1026 01:26:31.516817  190094 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-502124
	
	I1026 01:26:31.516893  190094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-502124
	I1026 01:26:31.545829  190094 main.go:141] libmachine: Using SSH client type: native
	I1026 01:26:31.546425  190094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 127.0.0.1 32953 <nil> <nil>}
	I1026 01:26:31.546472  190094 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-502124' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-502124/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-502124' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 01:26:31.660065  190094 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 01:26:31.660093  190094 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17491-8444/.minikube CaCertPath:/home/jenkins/minikube-integration/17491-8444/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17491-8444/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17491-8444/.minikube}
	I1026 01:26:31.660138  190094 ubuntu.go:177] setting up certificates
	I1026 01:26:31.660154  190094 provision.go:83] configureAuth start
	I1026 01:26:31.660217  190094 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-502124
	I1026 01:26:31.683043  190094 provision.go:138] copyHostCerts
	I1026 01:26:31.683116  190094 exec_runner.go:144] found /home/jenkins/minikube-integration/17491-8444/.minikube/key.pem, removing ...
	I1026 01:26:31.683127  190094 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17491-8444/.minikube/key.pem
	I1026 01:26:31.683184  190094 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17491-8444/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17491-8444/.minikube/key.pem (1675 bytes)
	I1026 01:26:31.683291  190094 exec_runner.go:144] found /home/jenkins/minikube-integration/17491-8444/.minikube/ca.pem, removing ...
	I1026 01:26:31.683299  190094 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17491-8444/.minikube/ca.pem
	I1026 01:26:31.683334  190094 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17491-8444/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17491-8444/.minikube/ca.pem (1078 bytes)
	I1026 01:26:31.683435  190094 exec_runner.go:144] found /home/jenkins/minikube-integration/17491-8444/.minikube/cert.pem, removing ...
	I1026 01:26:31.683443  190094 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17491-8444/.minikube/cert.pem
	I1026 01:26:31.683472  190094 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17491-8444/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17491-8444/.minikube/cert.pem (1123 bytes)
	I1026 01:26:31.683545  190094 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17491-8444/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17491-8444/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17491-8444/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-502124 san=[172.17.0.2 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-502124]
	I1026 01:26:31.842225  190094 provision.go:172] copyRemoteCerts
	I1026 01:26:31.842300  190094 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 01:26:31.842339  190094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-502124
	I1026 01:26:31.863141  190094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32953 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/running-upgrade-502124/id_rsa Username:docker}
	I1026 01:26:31.945846  190094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 01:26:31.964427  190094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1026 01:26:31.982274  190094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1026 01:26:32.006146  190094 provision.go:86] duration metric: configureAuth took 345.977977ms
	I1026 01:26:32.006172  190094 ubuntu.go:193] setting minikube options for container-runtime
	I1026 01:26:32.006370  190094 config.go:182] Loaded profile config "running-upgrade-502124": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1026 01:26:32.006484  190094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-502124
	I1026 01:26:32.040907  190094 main.go:141] libmachine: Using SSH client type: native
	I1026 01:26:32.041296  190094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 127.0.0.1 32953 <nil> <nil>}
	I1026 01:26:32.041320  190094 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 01:26:32.504928  190094 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 01:26:32.504957  190094 machine.go:91] provisioned docker machine in 1.146171408s
	I1026 01:26:32.504971  190094 start.go:300] post-start starting for "running-upgrade-502124" (driver="docker")
	I1026 01:26:32.504985  190094 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 01:26:32.505142  190094 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 01:26:32.505218  190094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-502124
	I1026 01:26:32.528150  190094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32953 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/running-upgrade-502124/id_rsa Username:docker}
	I1026 01:26:32.621316  190094 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 01:26:32.624485  190094 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1026 01:26:32.624511  190094 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 01:26:32.624520  190094 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1026 01:26:32.624526  190094 info.go:137] Remote host: Ubuntu 19.10
	I1026 01:26:32.624536  190094 filesync.go:126] Scanning /home/jenkins/minikube-integration/17491-8444/.minikube/addons for local assets ...
	I1026 01:26:32.624580  190094 filesync.go:126] Scanning /home/jenkins/minikube-integration/17491-8444/.minikube/files for local assets ...
	I1026 01:26:32.624658  190094 filesync.go:149] local asset: /home/jenkins/minikube-integration/17491-8444/.minikube/files/etc/ssl/certs/152462.pem -> 152462.pem in /etc/ssl/certs
	I1026 01:26:32.624752  190094 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 01:26:32.631656  190094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/files/etc/ssl/certs/152462.pem --> /etc/ssl/certs/152462.pem (1708 bytes)
	I1026 01:26:32.651292  190094 start.go:303] post-start completed in 146.305569ms
	I1026 01:26:32.651392  190094 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 01:26:32.651428  190094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-502124
	I1026 01:26:32.675771  190094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32953 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/running-upgrade-502124/id_rsa Username:docker}
	I1026 01:26:32.758329  190094 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 01:26:32.763098  190094 fix.go:56] fixHost completed within 1.431329172s
	I1026 01:26:32.763124  190094 start.go:83] releasing machines lock for "running-upgrade-502124", held for 1.431378675s
	I1026 01:26:32.763194  190094 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-502124
	I1026 01:26:32.782882  190094 ssh_runner.go:195] Run: cat /version.json
	I1026 01:26:32.782960  190094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-502124
	I1026 01:26:32.783000  190094 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 01:26:32.783072  190094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-502124
	I1026 01:26:32.805111  190094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32953 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/running-upgrade-502124/id_rsa Username:docker}
	I1026 01:26:32.805769  190094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32953 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/running-upgrade-502124/id_rsa Username:docker}
	W1026 01:26:32.947614  190094 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1026 01:26:32.947702  190094 ssh_runner.go:195] Run: systemctl --version
	I1026 01:26:32.951781  190094 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 01:26:33.020066  190094 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1026 01:26:33.025324  190094 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 01:26:33.056684  190094 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1026 01:26:33.056767  190094 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 01:26:33.095414  190094 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1026 01:26:33.095435  190094 start.go:472] detecting cgroup driver to use...
	I1026 01:26:33.095473  190094 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1026 01:26:33.095517  190094 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 01:26:33.124410  190094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 01:26:33.137272  190094 docker.go:198] disabling cri-docker service (if available) ...
	I1026 01:26:33.137329  190094 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 01:26:33.148313  190094 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 01:26:33.159343  190094 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1026 01:26:33.170063  190094 docker.go:208] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1026 01:26:33.170123  190094 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 01:26:33.247479  190094 docker.go:214] disabling docker service ...
	I1026 01:26:33.247553  190094 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 01:26:33.257872  190094 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 01:26:33.269694  190094 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 01:26:33.352097  190094 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 01:26:33.435810  190094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 01:26:33.474412  190094 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 01:26:33.490284  190094 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1026 01:26:33.490404  190094 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:26:33.501004  190094 out.go:177] 
	W1026 01:26:33.503015  190094 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1026 01:26:33.503044  190094 out.go:239] * 
	W1026 01:26:33.504097  190094 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 01:26:33.505850  190094 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:145: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-502124 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
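The failure above is mechanical: GNU sed exits with status 2 when -i is given a file it cannot open, and the v1.9.0-era container has no /etc/crio/crio.conf.d/02-crio.conf drop-in for the new binary to edit. A minimal, self-contained reproduction (temp-dir paths, not the real /etc/crio layout):

```shell
# Minimal reproduction of the RUNTIME_ENABLE failure (assumption: GNU sed, as
# in the kicbase image). sed -i exits with status 2 when the target file does
# not exist -- the same status 2 reported in the log above.
tmp=$(mktemp -d)
conf="$tmp/crio.conf.d/02-crio.conf"

sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$conf" 2>/dev/null
rc=$?
echo "missing drop-in -> sed exit $rc"

# Creating the drop-in first lets the same edit succeed:
mkdir -p "${conf%/*}"
printf 'pause_image = "registry.k8s.io/pause:3.1"\n' > "$conf"
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$conf"
cat "$conf"
```

In other words, the HEAD binary assumes the crio.conf.d drop-in layout already exists inside the container; the v1.9.0 image predates that layout, so an upgrade path would need to create the file (or fall back to /etc/crio/crio.conf) before attempting the in-place edit.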
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-10-26 01:26:33.534159786 +0000 UTC m=+1976.007028690
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-502124
helpers_test.go:235: (dbg) docker inspect running-upgrade-502124:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4dbabba752de7d98e67168862ee6588d9103cfc995d245308516715a1c5a34a1",
	        "Created": "2023-10-26T01:25:24.599012281Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 175374,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-26T01:25:25.220778685Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/4dbabba752de7d98e67168862ee6588d9103cfc995d245308516715a1c5a34a1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4dbabba752de7d98e67168862ee6588d9103cfc995d245308516715a1c5a34a1/hostname",
	        "HostsPath": "/var/lib/docker/containers/4dbabba752de7d98e67168862ee6588d9103cfc995d245308516715a1c5a34a1/hosts",
	        "LogPath": "/var/lib/docker/containers/4dbabba752de7d98e67168862ee6588d9103cfc995d245308516715a1c5a34a1/4dbabba752de7d98e67168862ee6588d9103cfc995d245308516715a1c5a34a1-json.log",
	        "Name": "/running-upgrade-502124",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-502124:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/298024d7c05463a6df499617abbe05ed1ec0e24a299c20b95a85b61b884c6d77-init/diff:/var/lib/docker/overlay2/02d2062893ae8540a6fa873ac353c042f108bddd55f48f2ac8366f6c820a721a/diff:/var/lib/docker/overlay2/86d255dc1899e465143debab668f47e5d3cc0327cf079eb27b19d4f322917367/diff:/var/lib/docker/overlay2/120def8918cdfb3e05520367d7297660c0e8e1e22bd0aa44239cbbbe69031b74/diff:/var/lib/docker/overlay2/2b9334c29ae33c115035ff8e11ec1ec97dd9a6f0c81ca53ceb83ea3375fcafbf/diff:/var/lib/docker/overlay2/d712e3b31d040efe9f3faca098cf2560288f05da2ffe3a26f83f704746b2a220/diff:/var/lib/docker/overlay2/770f9f34ff2a2b64342e3ce9f38753ef0c3012e9ff88dbb54da802f78d74859e/diff:/var/lib/docker/overlay2/c5a70b2fd07bad1063e621c11f0c02f975f11297d354855d4703b7e7b91dcdb5/diff:/var/lib/docker/overlay2/55031efc6327868a37bff2552c573a8efee61646dcd5fb5a7e4a38b842dee1bc/diff:/var/lib/docker/overlay2/bda69c8dd93906bfaf082c5c76394d1d4c21db19e074b4b26a21f35795ea860b/diff:/var/lib/docker/overlay2/d3acc82008489321b6367e5f8964e49b96a2906f508c8d6b3b5d490050d71d4d/diff:/var/lib/docker/overlay2/0ca90f595f00d94a12757671079b25c5f5f743614327940a33dc22fdbcd756ed/diff:/var/lib/docker/overlay2/207c20d2786bcbfa8f109af5187d84c1bd0bf094dfd6244756facc1313fb253f/diff:/var/lib/docker/overlay2/fba33e6a4f1cbd329dc9a28ce06831b01d04764d6c35b96f5eb34488c468de15/diff:/var/lib/docker/overlay2/4c3c2371cff9f9ad30b5030473e09b3bdf67fc9f18e5811323c05df3eedf7036/diff:/var/lib/docker/overlay2/211b104a159e7610db12d6ad70b3456114c64b488540987f443dd08e2638674a/diff:/var/lib/docker/overlay2/4e5ce1ff2d5d336d8f18044fac14353a3bfd91e3a36cd3a0f1f1bf543cd248b8/diff:/var/lib/docker/overlay2/fd22117104122a291c6ad4f0f99e81ec46f1c0e7d7c676fc18826c34c624f56f/diff:/var/lib/docker/overlay2/b930808e86955dc23b48945f275144a8f2c8a30819a2239dff1c5c5999d3b58e/diff:/var/lib/docker/overlay2/0c8fa957c73804d044b0e1156e0f9a49e0071457f8028190d06da7e5208fd7f7/diff:/var/lib/docker/overlay2/d2a563eef861f449aecfeaf61f4e1e18de9705373407a48317fbc94481174d2c/diff:/var/lib/docker/overlay2/d178f7172111b4173c95eeb20c3765d5f49e2e60d62fba6af81860b5211360f8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/298024d7c05463a6df499617abbe05ed1ec0e24a299c20b95a85b61b884c6d77/merged",
	                "UpperDir": "/var/lib/docker/overlay2/298024d7c05463a6df499617abbe05ed1ec0e24a299c20b95a85b61b884c6d77/diff",
	                "WorkDir": "/var/lib/docker/overlay2/298024d7c05463a6df499617abbe05ed1ec0e24a299c20b95a85b61b884c6d77/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-502124",
	                "Source": "/var/lib/docker/volumes/running-upgrade-502124/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-502124",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-502124",
	                "name.minikube.sigs.k8s.io": "running-upgrade-502124",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "540a9d9d003ed029340422b2b9e8d979e64dd9d443d4761446956f32737fee86",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32953"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32952"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32951"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/540a9d9d003e",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "6296dfa0b43395017d9bc3681735cb83eb0af2ce2f32193b76ddcb713a1fc3f4",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "c003e2e2ce09c530a9e7785ee921944d2b3da52da238dba6f9bf7d780500691e",
	                    "EndpointID": "6296dfa0b43395017d9bc3681735cb83eb0af2ce2f32193b76ddcb713a1fc3f4",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
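The inspect dump records the published host ports under NetworkSettings.Ports (22 -> 32953, 2376 -> 32952, 8443 -> 32951 here). A dependency-free sketch of pulling one of them back out of such a dump with grep/sed (the JSON below is a trimmed copy of the Ports block, embedded so the snippet is self-contained):

```shell
# Extract the published SSH host port from a docker-inspect style JSON dump
# using only grep/sed (fragile textual matching; jq would be the robust tool,
# but is assumed absent here). The Ports block is a trimmed copy of the log.
inspect='{
  "NetworkSettings": {
    "Ports": {
      "22/tcp":   [ { "HostIp": "127.0.0.1", "HostPort": "32953" } ],
      "2376/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "32952" } ],
      "8443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "32951" } ]
    }
  }
}'
ssh_port=$(printf '%s\n' "$inspect" \
  | grep '"22/tcp"' \
  | sed -n 's/.*"HostPort": "\([0-9]*\)".*/\1/p')
echo "$ssh_port"   # 32953
```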
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-502124 -n running-upgrade-502124
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-502124 -n running-upgrade-502124: exit status 4 (325.271479ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1026 01:26:33.850227  191564 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-502124" does not appear in /home/jenkins/minikube-integration/17491-8444/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-502124" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
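The status failure is downstream of the upgrade failure: the profile was never re-registered in the kubeconfig, so the endpoint lookup at status.go:415 finds nothing and the command returns exit status 4 (which helpers_test.go treats as possibly ok). A rough sketch of that lookup's shape, illustrative only and not minikube's actual code, against a synthetic kubeconfig:

```shell
# Sketch of the "does not appear in ... kubeconfig" failure mode: look the
# profile name up among the kubeconfig entries and fail when it is absent.
# The kubeconfig content and the exit-status convention are assumptions
# modeled on the log above, not minikube's implementation.
kc=$(mktemp)
cat > "$kc" <<'EOF'
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: https://127.0.0.1:32951
  name: minikube
EOF

profile="running-upgrade-502124"
if grep -q "name: $profile" "$kc"; then
  status=0
  echo "endpoint found for $profile"
else
  status=4
  echo "\"$profile\" does not appear in $kc"
fi
```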
helpers_test.go:175: Cleaning up "running-upgrade-502124" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-502124
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-502124: (3.410144865s)
--- FAIL: TestRunningBinaryUpgrade (73.43s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (77.64s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.9.0.2739680595.exe start -p stopped-upgrade-419792 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.9.0.2739680595.exe start -p stopped-upgrade-419792 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m10.864006568s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.9.0.2739680595.exe -p stopped-upgrade-419792 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.9.0.2739680595.exe -p stopped-upgrade-419792 stop: (1.03988046s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-419792 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-419792 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (5.728817065s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-419792] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17491
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17491-8444/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17491-8444/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-419792 in cluster stopped-upgrade-419792
	* Pulling base image ...
	* Restarting existing docker container for "stopped-upgrade-419792" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 01:26:23.263166  188249 out.go:296] Setting OutFile to fd 1 ...
	I1026 01:26:23.263285  188249 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1026 01:26:23.263293  188249 out.go:309] Setting ErrFile to fd 2...
	I1026 01:26:23.263298  188249 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1026 01:26:23.263488  188249 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17491-8444/.minikube/bin
	I1026 01:26:23.264031  188249 out.go:303] Setting JSON to false
	I1026 01:26:23.265718  188249 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4129,"bootTime":1698279454,"procs":843,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 01:26:23.265785  188249 start.go:138] virtualization: kvm guest
	I1026 01:26:23.268493  188249 out.go:177] * [stopped-upgrade-419792] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1026 01:26:23.269859  188249 out.go:177]   - MINIKUBE_LOCATION=17491
	I1026 01:26:23.269914  188249 notify.go:220] Checking for updates...
	I1026 01:26:23.271479  188249 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 01:26:23.273045  188249 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17491-8444/kubeconfig
	I1026 01:26:23.274475  188249 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17491-8444/.minikube
	I1026 01:26:23.275883  188249 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 01:26:23.278107  188249 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 01:26:23.280200  188249 config.go:182] Loaded profile config "stopped-upgrade-419792": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1026 01:26:23.280238  188249 start_flags.go:709] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883
	I1026 01:26:23.282450  188249 out.go:177] * Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	I1026 01:26:23.283807  188249 driver.go:378] Setting default libvirt URI to qemu:///system
	I1026 01:26:23.308219  188249 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1026 01:26:23.308323  188249 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 01:26:23.370143  188249 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:85 OomKillDisable:true NGoroutines:97 SystemTime:2023-10-26 01:26:23.361005158 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 01:26:23.370290  188249 docker.go:295] overlay module found
	I1026 01:26:23.372350  188249 out.go:177] * Using the docker driver based on existing profile
	I1026 01:26:23.375242  188249 start.go:298] selected driver: docker
	I1026 01:26:23.375262  188249 start.go:902] validating driver "docker" against &{Name:stopped-upgrade-419792 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-419792 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.3 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1026 01:26:23.375383  188249 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 01:26:23.376265  188249 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 01:26:23.434158  188249 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:85 OomKillDisable:true NGoroutines:97 SystemTime:2023-10-26 01:26:23.425574974 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 01:26:23.434522  188249 cni.go:84] Creating CNI manager for ""
	I1026 01:26:23.434547  188249 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I1026 01:26:23.434559  188249 start_flags.go:323] config:
	{Name:stopped-upgrade-419792 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-419792 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.3 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1026 01:26:23.436995  188249 out.go:177] * Starting control plane node stopped-upgrade-419792 in cluster stopped-upgrade-419792
	I1026 01:26:23.438755  188249 cache.go:121] Beginning downloading kic base image for docker with crio
	I1026 01:26:23.440275  188249 out.go:177] * Pulling base image ...
	I1026 01:26:23.441509  188249 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I1026 01:26:23.441547  188249 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon
	I1026 01:26:23.460517  188249 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon, skipping pull
	I1026 01:26:23.460541  188249 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 exists in daemon, skipping load
	W1026 01:26:23.476768  188249 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1026 01:26:23.476926  188249 profile.go:148] Saving config to /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/stopped-upgrade-419792/config.json ...
	I1026 01:26:23.476999  188249 cache.go:107] acquiring lock: {Name:mk94662b444b49d249705031d9b9e55f2bbcc880 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 01:26:23.477087  188249 cache.go:107] acquiring lock: {Name:mk99fba385363bdf54708c7f54667eab1c169566 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 01:26:23.477101  188249 cache.go:107] acquiring lock: {Name:mk3f5ed37c35fa8f57ba5ab0e1b296745f13f74f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 01:26:23.477128  188249 cache.go:115] /home/jenkins/minikube-integration/17491-8444/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1026 01:26:23.477141  188249 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17491-8444/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 151.911µs
	I1026 01:26:23.477128  188249 cache.go:107] acquiring lock: {Name:mk7e717057f75b85f5e993ef0020fb5b2ddb76d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 01:26:23.477168  188249 cache.go:115] /home/jenkins/minikube-integration/17491-8444/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1026 01:26:23.477169  188249 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17491-8444/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1026 01:26:23.477003  188249 cache.go:107] acquiring lock: {Name:mkaa9411c633fe4d078141eb33c38b56fff6634c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 01:26:23.477036  188249 cache.go:107] acquiring lock: {Name:mk5f489add3bb639ac1546966d8ac0dfa6381252 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 01:26:23.477073  188249 cache.go:107] acquiring lock: {Name:mkf86b98f8eff2bb70098cd2d40ac9fd54b02e10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 01:26:23.477181  188249 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17491-8444/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 113.811µs
	I1026 01:26:23.477231  188249 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17491-8444/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1026 01:26:23.477207  188249 cache.go:115] /home/jenkins/minikube-integration/17491-8444/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 exists
	I1026 01:26:23.477234  188249 cache.go:194] Successfully downloaded all kic artifacts
	I1026 01:26:23.477243  188249 cache.go:115] /home/jenkins/minikube-integration/17491-8444/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 exists
	I1026 01:26:23.477243  188249 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/17491-8444/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0" took 161.634µs
	I1026 01:26:23.477249  188249 cache.go:115] /home/jenkins/minikube-integration/17491-8444/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 exists
	I1026 01:26:23.477252  188249 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/17491-8444/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 succeeded
	I1026 01:26:23.477252  188249 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/17491-8444/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0" took 226.227µs
	I1026 01:26:23.477258  188249 cache.go:115] /home/jenkins/minikube-integration/17491-8444/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 exists
	I1026 01:26:23.477257  188249 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/17491-8444/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0" took 192.17µs
	I1026 01:26:23.477262  188249 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/17491-8444/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 succeeded
	I1026 01:26:23.477262  188249 start.go:365] acquiring machines lock for stopped-upgrade-419792: {Name:mk8a6da02ca11f91c72e12f41dd779aa486d0e30 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 01:26:23.477267  188249 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/17491-8444/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 succeeded
	I1026 01:26:23.477267  188249 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17491-8444/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2" took 175.757µs
	I1026 01:26:23.477285  188249 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17491-8444/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 succeeded
	I1026 01:26:23.477228  188249 cache.go:115] /home/jenkins/minikube-integration/17491-8444/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 exists
	I1026 01:26:23.477331  188249 start.go:369] acquired machines lock for "stopped-upgrade-419792" in 58.12µs
	I1026 01:26:23.477350  188249 start.go:96] Skipping create...Using existing machine configuration
	I1026 01:26:23.477355  188249 fix.go:54] fixHost starting: m01
	I1026 01:26:23.477347  188249 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/17491-8444/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0" took 306.928µs
	I1026 01:26:23.477379  188249 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/17491-8444/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 succeeded
	I1026 01:26:23.477225  188249 cache.go:107] acquiring lock: {Name:mk845a95c9639b5b49cac8db395cd9b020133a42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 01:26:23.477491  188249 cache.go:115] /home/jenkins/minikube-integration/17491-8444/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 exists
	I1026 01:26:23.477501  188249 cache.go:96] cache image "registry.k8s.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/17491-8444/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7" took 322.869µs
	I1026 01:26:23.477506  188249 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/17491-8444/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 succeeded
	I1026 01:26:23.477515  188249 cache.go:87] Successfully saved all images to host disk.
	I1026 01:26:23.477625  188249 cli_runner.go:164] Run: docker container inspect stopped-upgrade-419792 --format={{.State.Status}}
	I1026 01:26:23.495656  188249 fix.go:102] recreateIfNeeded on stopped-upgrade-419792: state=Stopped err=<nil>
	W1026 01:26:23.495705  188249 fix.go:128] unexpected machine state, will restart: <nil>
	I1026 01:26:23.498040  188249 out.go:177] * Restarting existing docker container for "stopped-upgrade-419792" ...
	I1026 01:26:23.499497  188249 cli_runner.go:164] Run: docker start stopped-upgrade-419792
	I1026 01:26:23.767662  188249 cli_runner.go:164] Run: docker container inspect stopped-upgrade-419792 --format={{.State.Status}}
	I1026 01:26:23.785642  188249 kic.go:430] container "stopped-upgrade-419792" state is running.
	I1026 01:26:23.786084  188249 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-419792
	I1026 01:26:23.807233  188249 profile.go:148] Saving config to /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/stopped-upgrade-419792/config.json ...
	I1026 01:26:23.807460  188249 machine.go:88] provisioning docker machine ...
	I1026 01:26:23.807483  188249 ubuntu.go:169] provisioning hostname "stopped-upgrade-419792"
	I1026 01:26:23.807543  188249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-419792
	I1026 01:26:23.826590  188249 main.go:141] libmachine: Using SSH client type: native
	I1026 01:26:23.827032  188249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 127.0.0.1 32966 <nil> <nil>}
	I1026 01:26:23.827054  188249 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-419792 && echo "stopped-upgrade-419792" | sudo tee /etc/hostname
	I1026 01:26:23.827740  188249 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34810->127.0.0.1:32966: read: connection reset by peer
	I1026 01:26:26.950409  188249 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-419792
	
	I1026 01:26:26.950494  188249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-419792
	I1026 01:26:26.967154  188249 main.go:141] libmachine: Using SSH client type: native
	I1026 01:26:26.967497  188249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 127.0.0.1 32966 <nil> <nil>}
	I1026 01:26:26.967519  188249 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-419792' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-419792/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-419792' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 01:26:27.078341  188249 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 01:26:27.078372  188249 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17491-8444/.minikube CaCertPath:/home/jenkins/minikube-integration/17491-8444/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17491-8444/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17491-8444/.minikube}
	I1026 01:26:27.078400  188249 ubuntu.go:177] setting up certificates
	I1026 01:26:27.078413  188249 provision.go:83] configureAuth start
	I1026 01:26:27.078473  188249 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-419792
	I1026 01:26:27.097796  188249 provision.go:138] copyHostCerts
	I1026 01:26:27.097880  188249 exec_runner.go:144] found /home/jenkins/minikube-integration/17491-8444/.minikube/ca.pem, removing ...
	I1026 01:26:27.097894  188249 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17491-8444/.minikube/ca.pem
	I1026 01:26:27.097973  188249 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17491-8444/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17491-8444/.minikube/ca.pem (1078 bytes)
	I1026 01:26:27.098105  188249 exec_runner.go:144] found /home/jenkins/minikube-integration/17491-8444/.minikube/cert.pem, removing ...
	I1026 01:26:27.098123  188249 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17491-8444/.minikube/cert.pem
	I1026 01:26:27.098161  188249 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17491-8444/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17491-8444/.minikube/cert.pem (1123 bytes)
	I1026 01:26:27.098260  188249 exec_runner.go:144] found /home/jenkins/minikube-integration/17491-8444/.minikube/key.pem, removing ...
	I1026 01:26:27.098273  188249 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17491-8444/.minikube/key.pem
	I1026 01:26:27.098312  188249 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17491-8444/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17491-8444/.minikube/key.pem (1675 bytes)
	I1026 01:26:27.098398  188249 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17491-8444/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17491-8444/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17491-8444/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-419792 san=[172.17.0.3 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-419792]
	I1026 01:26:27.363130  188249 provision.go:172] copyRemoteCerts
	I1026 01:26:27.363197  188249 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 01:26:27.363255  188249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-419792
	I1026 01:26:27.384708  188249 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32966 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/stopped-upgrade-419792/id_rsa Username:docker}
	I1026 01:26:27.470780  188249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 01:26:27.492340  188249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1026 01:26:27.514349  188249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 01:26:27.536945  188249 provision.go:86] duration metric: configureAuth took 458.512356ms
	I1026 01:26:27.536976  188249 ubuntu.go:193] setting minikube options for container-runtime
	I1026 01:26:27.537204  188249 config.go:182] Loaded profile config "stopped-upgrade-419792": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1026 01:26:27.537328  188249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-419792
	I1026 01:26:27.559116  188249 main.go:141] libmachine: Using SSH client type: native
	I1026 01:26:27.559609  188249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 127.0.0.1 32966 <nil> <nil>}
	I1026 01:26:27.559657  188249 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 01:26:28.129276  188249 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 01:26:28.129309  188249 machine.go:91] provisioned docker machine in 4.321834517s
	I1026 01:26:28.129318  188249 start.go:300] post-start starting for "stopped-upgrade-419792" (driver="docker")
	I1026 01:26:28.129329  188249 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 01:26:28.129385  188249 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 01:26:28.129424  188249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-419792
	I1026 01:26:28.146249  188249 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32966 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/stopped-upgrade-419792/id_rsa Username:docker}
	I1026 01:26:28.225336  188249 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 01:26:28.228054  188249 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1026 01:26:28.228086  188249 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 01:26:28.228095  188249 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1026 01:26:28.228102  188249 info.go:137] Remote host: Ubuntu 19.10
	I1026 01:26:28.228111  188249 filesync.go:126] Scanning /home/jenkins/minikube-integration/17491-8444/.minikube/addons for local assets ...
	I1026 01:26:28.228168  188249 filesync.go:126] Scanning /home/jenkins/minikube-integration/17491-8444/.minikube/files for local assets ...
	I1026 01:26:28.228258  188249 filesync.go:149] local asset: /home/jenkins/minikube-integration/17491-8444/.minikube/files/etc/ssl/certs/152462.pem -> 152462.pem in /etc/ssl/certs
	I1026 01:26:28.228367  188249 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 01:26:28.234749  188249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17491-8444/.minikube/files/etc/ssl/certs/152462.pem --> /etc/ssl/certs/152462.pem (1708 bytes)
	I1026 01:26:28.251649  188249 start.go:303] post-start completed in 122.31521ms
	I1026 01:26:28.251740  188249 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 01:26:28.251790  188249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-419792
	I1026 01:26:28.270380  188249 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32966 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/stopped-upgrade-419792/id_rsa Username:docker}
	I1026 01:26:28.346435  188249 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 01:26:28.350707  188249 fix.go:56] fixHost completed within 4.873344798s
	I1026 01:26:28.350731  188249 start.go:83] releasing machines lock for "stopped-upgrade-419792", held for 4.873386437s
	I1026 01:26:28.350805  188249 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-419792
	I1026 01:26:28.368357  188249 ssh_runner.go:195] Run: cat /version.json
	I1026 01:26:28.368420  188249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-419792
	I1026 01:26:28.368463  188249 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 01:26:28.368528  188249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-419792
	I1026 01:26:28.386945  188249 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32966 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/stopped-upgrade-419792/id_rsa Username:docker}
	I1026 01:26:28.387871  188249 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32966 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/stopped-upgrade-419792/id_rsa Username:docker}
	W1026 01:26:28.496111  188249 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1026 01:26:28.496185  188249 ssh_runner.go:195] Run: systemctl --version
	I1026 01:26:28.500193  188249 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 01:26:28.555701  188249 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1026 01:26:28.560191  188249 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 01:26:28.575297  188249 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1026 01:26:28.575378  188249 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 01:26:28.598055  188249 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1026 01:26:28.598075  188249 start.go:472] detecting cgroup driver to use...
	I1026 01:26:28.598117  188249 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1026 01:26:28.598157  188249 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 01:26:28.620429  188249 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 01:26:28.629565  188249 docker.go:198] disabling cri-docker service (if available) ...
	I1026 01:26:28.629631  188249 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 01:26:28.639519  188249 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 01:26:28.648389  188249 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1026 01:26:28.657491  188249 docker.go:208] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1026 01:26:28.657549  188249 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 01:26:28.744163  188249 docker.go:214] disabling docker service ...
	I1026 01:26:28.744227  188249 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 01:26:28.754147  188249 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 01:26:28.763877  188249 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 01:26:28.824639  188249 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 01:26:28.888954  188249 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 01:26:28.898100  188249 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 01:26:28.910666  188249 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1026 01:26:28.910714  188249 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:26:28.920666  188249 out.go:177] 
	W1026 01:26:28.922113  188249 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1026 01:26:28.922136  188249 out.go:239] * 
	W1026 01:26:28.922991  188249 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 01:26:28.924499  188249 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:213: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-419792 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (77.64s)

                                                
                                    

Test pass (278/308)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 6.41
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.08
10 TestDownloadOnly/v1.28.3/json-events 5.38
11 TestDownloadOnly/v1.28.3/preload-exists 0
15 TestDownloadOnly/v1.28.3/LogsDuration 0.08
16 TestDownloadOnly/DeleteAll 0.21
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.13
18 TestDownloadOnlyKic 1.29
19 TestBinaryMirror 0.75
20 TestOffline 88.96
23 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
24 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
25 TestAddons/Setup 137.1
27 TestAddons/parallel/Registry 16.62
30 TestAddons/parallel/MetricsServer 5.75
31 TestAddons/parallel/HelmTiller 9.42
33 TestAddons/parallel/CSI 43.39
34 TestAddons/parallel/Headlamp 13.93
35 TestAddons/parallel/CloudSpanner 5.29
36 TestAddons/parallel/LocalPath 54.51
37 TestAddons/parallel/NvidiaDevicePlugin 5.49
40 TestAddons/serial/GCPAuth/Namespaces 0.12
41 TestAddons/StoppedEnableDisable 12.22
42 TestCertOptions 27.39
43 TestCertExpiration 240.32
45 TestForceSystemdFlag 31.98
46 TestForceSystemdEnv 43.9
48 TestKVMDriverInstallOrUpdate 3.04
52 TestErrorSpam/setup 24.34
53 TestErrorSpam/start 0.66
54 TestErrorSpam/status 0.89
55 TestErrorSpam/pause 1.53
56 TestErrorSpam/unpause 1.49
57 TestErrorSpam/stop 1.42
60 TestFunctional/serial/CopySyncFile 0
61 TestFunctional/serial/StartWithProxy 40.6
62 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/SoftStart 38.93
64 TestFunctional/serial/KubeContext 0.04
65 TestFunctional/serial/KubectlGetPods 0.07
68 TestFunctional/serial/CacheCmd/cache/add_remote 2.8
69 TestFunctional/serial/CacheCmd/cache/add_local 1.19
70 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
71 TestFunctional/serial/CacheCmd/cache/list 0.06
72 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
73 TestFunctional/serial/CacheCmd/cache/cache_reload 1.66
74 TestFunctional/serial/CacheCmd/cache/delete 0.13
75 TestFunctional/serial/MinikubeKubectlCmd 0.12
76 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
77 TestFunctional/serial/ExtraConfig 33.36
78 TestFunctional/serial/ComponentHealth 0.07
79 TestFunctional/serial/LogsCmd 1.42
80 TestFunctional/serial/LogsFileCmd 1.4
81 TestFunctional/serial/InvalidService 4.63
83 TestFunctional/parallel/ConfigCmd 0.48
84 TestFunctional/parallel/DashboardCmd 9.55
85 TestFunctional/parallel/DryRun 0.41
86 TestFunctional/parallel/InternationalLanguage 0.17
87 TestFunctional/parallel/StatusCmd 1
91 TestFunctional/parallel/ServiceCmdConnect 7.81
92 TestFunctional/parallel/AddonsCmd 0.16
93 TestFunctional/parallel/PersistentVolumeClaim 40.22
95 TestFunctional/parallel/SSHCmd 0.62
96 TestFunctional/parallel/CpCmd 1.24
97 TestFunctional/parallel/MySQL 25.72
98 TestFunctional/parallel/FileSync 0.3
99 TestFunctional/parallel/CertSync 1.85
103 TestFunctional/parallel/NodeLabels 0.15
105 TestFunctional/parallel/NonActiveRuntimeDisabled 0.61
107 TestFunctional/parallel/License 0.23
108 TestFunctional/parallel/ServiceCmd/DeployApp 9.25
109 TestFunctional/parallel/UpdateContextCmd/no_changes 0.25
110 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
111 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.44
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 23.35
117 TestFunctional/parallel/ServiceCmd/List 0.75
118 TestFunctional/parallel/ServiceCmd/JSONOutput 0.61
119 TestFunctional/parallel/ServiceCmd/HTTPS 0.38
120 TestFunctional/parallel/ServiceCmd/Format 0.37
121 TestFunctional/parallel/ServiceCmd/URL 0.37
122 TestFunctional/parallel/Version/short 0.06
123 TestFunctional/parallel/Version/components 0.48
124 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
125 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
126 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
127 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
128 TestFunctional/parallel/ImageCommands/ImageBuild 2.06
129 TestFunctional/parallel/ImageCommands/Setup 0.9
130 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 7.28
131 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 4.53
132 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.16
133 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
134 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
138 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
139 TestFunctional/parallel/ProfileCmd/profile_not_create 0.38
140 TestFunctional/parallel/ProfileCmd/profile_list 0.41
141 TestFunctional/parallel/ProfileCmd/profile_json_output 0.44
142 TestFunctional/parallel/MountCmd/any-port 6.15
143 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.88
144 TestFunctional/parallel/ImageCommands/ImageRemove 0.46
145 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.2
146 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.96
147 TestFunctional/parallel/MountCmd/specific-port 1.99
148 TestFunctional/parallel/MountCmd/VerifyCleanup 2.07
149 TestFunctional/delete_addon-resizer_images 0.07
150 TestFunctional/delete_my-image_image 0.02
151 TestFunctional/delete_minikube_cached_images 0.02
155 TestIngressAddonLegacy/StartLegacyK8sCluster 70.82
157 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 10.9
158 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.55
162 TestJSONOutput/start/Command 69.62
163 TestJSONOutput/start/Audit 0
165 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
166 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
168 TestJSONOutput/pause/Command 0.66
169 TestJSONOutput/pause/Audit 0
171 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
172 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
174 TestJSONOutput/unpause/Command 0.61
175 TestJSONOutput/unpause/Audit 0
177 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
180 TestJSONOutput/stop/Command 5.76
181 TestJSONOutput/stop/Audit 0
183 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
185 TestErrorJSONOutput 0.23
187 TestKicCustomNetwork/create_custom_network 30
188 TestKicCustomNetwork/use_default_bridge_network 26.62
189 TestKicExistingNetwork 26.69
190 TestKicCustomSubnet 27.55
191 TestKicStaticIP 24.71
192 TestMainNoArgs 0.06
193 TestMinikubeProfile 51.07
196 TestMountStart/serial/StartWithMountFirst 5.45
197 TestMountStart/serial/VerifyMountFirst 0.25
198 TestMountStart/serial/StartWithMountSecond 7.94
199 TestMountStart/serial/VerifyMountSecond 0.26
200 TestMountStart/serial/DeleteFirst 1.63
201 TestMountStart/serial/VerifyMountPostDelete 0.25
202 TestMountStart/serial/Stop 1.22
203 TestMountStart/serial/RestartStopped 7.18
204 TestMountStart/serial/VerifyMountPostStop 0.26
207 TestMultiNode/serial/FreshStart2Nodes 128.02
208 TestMultiNode/serial/DeployApp2Nodes 3.69
210 TestMultiNode/serial/AddNode 19.17
211 TestMultiNode/serial/ProfileList 0.29
212 TestMultiNode/serial/CopyFile 9.2
213 TestMultiNode/serial/StopNode 2.12
214 TestMultiNode/serial/StartAfterStop 10.52
215 TestMultiNode/serial/RestartKeepsNodes 110.86
216 TestMultiNode/serial/DeleteNode 4.7
217 TestMultiNode/serial/StopMultiNode 23.9
218 TestMultiNode/serial/RestartMultiNode 78.77
219 TestMultiNode/serial/ValidateNameConflict 26.33
224 TestPreload 146.67
226 TestScheduledStopUnix 97.85
229 TestInsufficientStorage 13.24
232 TestKubernetesUpgrade 360.72
233 TestMissingContainerUpgrade 160.29
235 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
236 TestNoKubernetes/serial/StartWithK8s 39.34
237 TestNoKubernetes/serial/StartWithStopK8s 14.91
238 TestNoKubernetes/serial/Start 9.67
239 TestNoKubernetes/serial/VerifyK8sNotRunning 0.3
240 TestNoKubernetes/serial/ProfileList 1.61
241 TestNoKubernetes/serial/Stop 1.79
242 TestNoKubernetes/serial/StartNoArgs 6.21
243 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.35
244 TestStoppedBinaryUpgrade/Setup 0.43
246 TestStoppedBinaryUpgrade/MinikubeLogs 0.6
255 TestPause/serial/Start 72.3
263 TestNetworkPlugins/group/false 3.63
267 TestPause/serial/SecondStartNoReconfiguration 43.03
269 TestStartStop/group/old-k8s-version/serial/FirstStart 106.42
270 TestPause/serial/Pause 0.65
271 TestPause/serial/VerifyStatus 0.33
272 TestPause/serial/Unpause 0.64
273 TestPause/serial/PauseAgain 0.75
274 TestPause/serial/DeletePaused 4.21
275 TestPause/serial/VerifyDeletedResources 0.64
277 TestStartStop/group/no-preload/serial/FirstStart 56.65
278 TestStartStop/group/no-preload/serial/DeployApp 8.34
279 TestStartStop/group/old-k8s-version/serial/DeployApp 8.39
280 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.93
281 TestStartStop/group/no-preload/serial/Stop 11.93
282 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.79
283 TestStartStop/group/old-k8s-version/serial/Stop 11.99
284 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
285 TestStartStop/group/no-preload/serial/SecondStart 335.23
286 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.24
287 TestStartStop/group/old-k8s-version/serial/SecondStart 426.33
289 TestStartStop/group/embed-certs/serial/FirstStart 70.24
291 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 71.36
292 TestStartStop/group/embed-certs/serial/DeployApp 8.34
293 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.36
294 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.94
295 TestStartStop/group/embed-certs/serial/Stop 11.95
296 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.89
297 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.96
298 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
299 TestStartStop/group/embed-certs/serial/SecondStart 343.05
300 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
301 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 337.58
302 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 11.02
303 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
304 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.31
305 TestStartStop/group/no-preload/serial/Pause 2.8
307 TestStartStop/group/newest-cni/serial/FirstStart 38.46
308 TestStartStop/group/newest-cni/serial/DeployApp 0
309 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.87
310 TestStartStop/group/newest-cni/serial/Stop 1.25
311 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
312 TestStartStop/group/newest-cni/serial/SecondStart 25.99
313 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
314 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
315 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.31
316 TestStartStop/group/newest-cni/serial/Pause 2.51
317 TestNetworkPlugins/group/auto/Start 42.99
318 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
319 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
320 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.32
321 TestStartStop/group/old-k8s-version/serial/Pause 2.97
322 TestNetworkPlugins/group/kindnet/Start 44.57
323 TestNetworkPlugins/group/auto/KubeletFlags 0.49
324 TestNetworkPlugins/group/auto/NetCatPod 9.43
325 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 10.02
326 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 13.03
327 TestNetworkPlugins/group/auto/DNS 0.21
328 TestNetworkPlugins/group/auto/Localhost 0.17
329 TestNetworkPlugins/group/auto/HairPin 0.18
330 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
331 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
332 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.33
333 TestStartStop/group/embed-certs/serial/Pause 2.94
334 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
335 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.35
336 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.47
337 TestNetworkPlugins/group/kindnet/KubeletFlags 0.38
338 TestNetworkPlugins/group/kindnet/NetCatPod 10.51
339 TestNetworkPlugins/group/calico/Start 69.02
340 TestNetworkPlugins/group/custom-flannel/Start 59.03
341 TestNetworkPlugins/group/enable-default-cni/Start 83.62
342 TestNetworkPlugins/group/kindnet/DNS 0.28
343 TestNetworkPlugins/group/kindnet/Localhost 0.29
344 TestNetworkPlugins/group/kindnet/HairPin 0.26
345 TestNetworkPlugins/group/flannel/Start 58.52
346 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.29
347 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.33
348 TestNetworkPlugins/group/calico/ControllerPod 5.02
349 TestNetworkPlugins/group/custom-flannel/DNS 0.16
350 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
351 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
352 TestNetworkPlugins/group/calico/KubeletFlags 0.31
353 TestNetworkPlugins/group/calico/NetCatPod 9.34
354 TestNetworkPlugins/group/calico/DNS 0.16
355 TestNetworkPlugins/group/calico/Localhost 0.16
356 TestNetworkPlugins/group/calico/HairPin 0.15
357 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
358 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.31
359 TestNetworkPlugins/group/flannel/ControllerPod 5.02
360 TestNetworkPlugins/group/bridge/Start 38.98
361 TestNetworkPlugins/group/flannel/KubeletFlags 0.34
362 TestNetworkPlugins/group/flannel/NetCatPod 11.34
363 TestNetworkPlugins/group/enable-default-cni/DNS 0.25
364 TestNetworkPlugins/group/enable-default-cni/Localhost 0.2
365 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
366 TestNetworkPlugins/group/flannel/DNS 0.21
367 TestNetworkPlugins/group/flannel/Localhost 0.17
368 TestNetworkPlugins/group/flannel/HairPin 0.15
369 TestNetworkPlugins/group/bridge/KubeletFlags 0.27
370 TestNetworkPlugins/group/bridge/NetCatPod 9.24
371 TestNetworkPlugins/group/bridge/DNS 0.14
372 TestNetworkPlugins/group/bridge/Localhost 0.14
373 TestNetworkPlugins/group/bridge/HairPin 0.14
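The duration table above is a flat `order test-name seconds` listing. Where a quick tabulation is needed (for example, to find the slowest tests), it can be parsed with a short script. This is an illustrative sketch, not part of the test harness; the sample rows are copied verbatim from the table above.

```python
def parse_row(line: str) -> tuple[str, float]:
    """Split one 'order test-name duration' row from the report table."""
    _order, name, seconds = line.split()
    return name, float(seconds)

# Sample rows copied from the table above.
rows = [
    "207 TestMultiNode/serial/FreshStart2Nodes 128.02",
    "232 TestKubernetesUpgrade 360.72",
    "373 TestNetworkPlugins/group/bridge/HairPin 0.14",
]
durations = dict(parse_row(r) for r in rows)
slowest = max(durations, key=durations.get)  # → "TestKubernetesUpgrade"
```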

TestDownloadOnly/v1.16.0/json-events (6.41s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-179503 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-179503 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.411772742s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (6.41s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-179503
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-179503: exit status 85 (74.559696ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-179503 | jenkins | v1.31.2 | 26 Oct 23 00:53 UTC |          |
	|         | -p download-only-179503        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/26 00:53:37
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 00:53:37.634661   15257 out.go:296] Setting OutFile to fd 1 ...
	I1026 00:53:37.634803   15257 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1026 00:53:37.634811   15257 out.go:309] Setting ErrFile to fd 2...
	I1026 00:53:37.634816   15257 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1026 00:53:37.634995   15257 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17491-8444/.minikube/bin
	W1026 00:53:37.635098   15257 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17491-8444/.minikube/config/config.json: open /home/jenkins/minikube-integration/17491-8444/.minikube/config/config.json: no such file or directory
	I1026 00:53:37.635664   15257 out.go:303] Setting JSON to true
	I1026 00:53:37.636529   15257 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2163,"bootTime":1698279454,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 00:53:37.636595   15257 start.go:138] virtualization: kvm guest
	I1026 00:53:37.639327   15257 out.go:97] [download-only-179503] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1026 00:53:37.641105   15257 out.go:169] MINIKUBE_LOCATION=17491
	W1026 00:53:37.639465   15257 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17491-8444/.minikube/cache/preloaded-tarball: no such file or directory
	I1026 00:53:37.639500   15257 notify.go:220] Checking for updates...
	I1026 00:53:37.642961   15257 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 00:53:37.644651   15257 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17491-8444/kubeconfig
	I1026 00:53:37.646172   15257 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17491-8444/.minikube
	I1026 00:53:37.647792   15257 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1026 00:53:37.650685   15257 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1026 00:53:37.651000   15257 driver.go:378] Setting default libvirt URI to qemu:///system
	I1026 00:53:37.671395   15257 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1026 00:53:37.671470   15257 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 00:53:38.031970   15257 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:43 SystemTime:2023-10-26 00:53:38.023153486 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 00:53:38.032100   15257 docker.go:295] overlay module found
	I1026 00:53:38.034285   15257 out.go:97] Using the docker driver based on user configuration
	I1026 00:53:38.034326   15257 start.go:298] selected driver: docker
	I1026 00:53:38.034335   15257 start.go:902] validating driver "docker" against <nil>
	I1026 00:53:38.034424   15257 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 00:53:38.087039   15257 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:43 SystemTime:2023-10-26 00:53:38.078472243 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 00:53:38.087232   15257 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1026 00:53:38.087899   15257 start_flags.go:394] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I1026 00:53:38.088121   15257 start_flags.go:916] Wait components to verify : map[apiserver:true system_pods:true]
	I1026 00:53:38.090316   15257 out.go:169] Using Docker driver with root privileges
	I1026 00:53:38.091963   15257 cni.go:84] Creating CNI manager for ""
	I1026 00:53:38.091979   15257 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 00:53:38.091993   15257 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1026 00:53:38.092008   15257 start_flags.go:323] config:
	{Name:download-only-179503 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-179503 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1026 00:53:38.093666   15257 out.go:97] Starting control plane node download-only-179503 in cluster download-only-179503
	I1026 00:53:38.093700   15257 cache.go:121] Beginning downloading kic base image for docker with crio
	I1026 00:53:38.095342   15257 out.go:97] Pulling base image ...
	I1026 00:53:38.095374   15257 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1026 00:53:38.095502   15257 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon
	I1026 00:53:38.110777   15257 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 to local cache
	I1026 00:53:38.110982   15257 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local cache directory
	I1026 00:53:38.111063   15257 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 to local cache
	I1026 00:53:38.123638   15257 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I1026 00:53:38.123661   15257 cache.go:56] Caching tarball of preloaded images
	I1026 00:53:38.123785   15257 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1026 00:53:38.125915   15257 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1026 00:53:38.125933   15257 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1026 00:53:38.161047   15257 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/17491-8444/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I1026 00:53:41.144893   15257 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 as a tarball
	I1026 00:53:41.613335   15257 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1026 00:53:41.613424   15257 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17491-8444/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1026 00:53:42.515409   15257 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I1026 00:53:42.515782   15257 profile.go:148] Saving config to /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/download-only-179503/config.json ...
	I1026 00:53:42.515820   15257 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/download-only-179503/config.json: {Name:mk5fd5db268576fe64a4ff1e56ad3e3f88dd69e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 00:53:42.516025   15257 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1026 00:53:42.516232   15257 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/17491-8444/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-179503"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.08s)
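In the log above, the preload tarball is fetched with a `?checksum=md5:...` query parameter and then checked on disk ("saving checksum ... verifying checksum ..."). A minimal sketch of that kind of streamed MD5 verification follows; the function name is illustrative, not minikube's actual code.

```python
import hashlib
import os
import tempfile

def md5_matches(path: str, expected_hex: str, chunk_size: int = 1 << 20) -> bool:
    """Hash the file in chunks (preload tarballs are large) and compare digests."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_hex

# Demo against a throwaway file standing in for the downloaded tarball.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"preloaded-images")
    tmp_path = f.name
ok = md5_matches(tmp_path, hashlib.md5(b"preloaded-images").hexdigest())
bad = md5_matches(tmp_path, "0" * 32)
os.remove(tmp_path)
```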

TestDownloadOnly/v1.28.3/json-events (5.38s)

=== RUN   TestDownloadOnly/v1.28.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-179503 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-179503 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.383804565s)
--- PASS: TestDownloadOnly/v1.28.3/json-events (5.38s)

TestDownloadOnly/v1.28.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.3/preload-exists
--- PASS: TestDownloadOnly/v1.28.3/preload-exists (0.00s)

TestDownloadOnly/v1.28.3/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.28.3/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-179503
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-179503: exit status 85 (76.127109ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-179503 | jenkins | v1.31.2 | 26 Oct 23 00:53 UTC |          |
	|         | -p download-only-179503        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-179503 | jenkins | v1.31.2 | 26 Oct 23 00:53 UTC |          |
	|         | -p download-only-179503        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.3   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/26 00:53:44
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 00:53:44.118286   15413 out.go:296] Setting OutFile to fd 1 ...
	I1026 00:53:44.118542   15413 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1026 00:53:44.118550   15413 out.go:309] Setting ErrFile to fd 2...
	I1026 00:53:44.118555   15413 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1026 00:53:44.118735   15413 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17491-8444/.minikube/bin
	W1026 00:53:44.118837   15413 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17491-8444/.minikube/config/config.json: open /home/jenkins/minikube-integration/17491-8444/.minikube/config/config.json: no such file or directory
	I1026 00:53:44.119254   15413 out.go:303] Setting JSON to true
	I1026 00:53:44.120054   15413 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2170,"bootTime":1698279454,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 00:53:44.120119   15413 start.go:138] virtualization: kvm guest
	I1026 00:53:44.122476   15413 out.go:97] [download-only-179503] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1026 00:53:44.124223   15413 out.go:169] MINIKUBE_LOCATION=17491
	I1026 00:53:44.122642   15413 notify.go:220] Checking for updates...
	I1026 00:53:44.125952   15413 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 00:53:44.127663   15413 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17491-8444/kubeconfig
	I1026 00:53:44.129346   15413 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17491-8444/.minikube
	I1026 00:53:44.131000   15413 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1026 00:53:44.133908   15413 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1026 00:53:44.134368   15413 config.go:182] Loaded profile config "download-only-179503": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W1026 00:53:44.134418   15413 start.go:810] api.Load failed for download-only-179503: filestore "download-only-179503": Docker machine "download-only-179503" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1026 00:53:44.134496   15413 driver.go:378] Setting default libvirt URI to qemu:///system
	W1026 00:53:44.134522   15413 start.go:810] api.Load failed for download-only-179503: filestore "download-only-179503": Docker machine "download-only-179503" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1026 00:53:44.156632   15413 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1026 00:53:44.156725   15413 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 00:53:44.206937   15413 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-10-26 00:53:44.198798757 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 00:53:44.207031   15413 docker.go:295] overlay module found
	I1026 00:53:44.209030   15413 out.go:97] Using the docker driver based on existing profile
	I1026 00:53:44.209054   15413 start.go:298] selected driver: docker
	I1026 00:53:44.209059   15413 start.go:902] validating driver "docker" against &{Name:download-only-179503 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-179503 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1026 00:53:44.209195   15413 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 00:53:44.260186   15413 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-10-26 00:53:44.252095186 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 00:53:44.260820   15413 cni.go:84] Creating CNI manager for ""
	I1026 00:53:44.260841   15413 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 00:53:44.260854   15413 start_flags.go:323] config:
	{Name:download-only-179503 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:download-only-179503 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPU
s:}
	I1026 00:53:44.262954   15413 out.go:97] Starting control plane node download-only-179503 in cluster download-only-179503
	I1026 00:53:44.262969   15413 cache.go:121] Beginning downloading kic base image for docker with crio
	I1026 00:53:44.264300   15413 out.go:97] Pulling base image ...
	I1026 00:53:44.264324   15413 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1026 00:53:44.264372   15413 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon
	I1026 00:53:44.278836   15413 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 to local cache
	I1026 00:53:44.278976   15413 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local cache directory
	I1026 00:53:44.278998   15413 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local cache directory, skipping pull
	I1026 00:53:44.279009   15413 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 exists in cache, skipping pull
	I1026 00:53:44.279022   15413 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 as a tarball
	I1026 00:53:44.293539   15413 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1026 00:53:44.293583   15413 cache.go:56] Caching tarball of preloaded images
	I1026 00:53:44.293741   15413 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1026 00:53:44.295751   15413 out.go:97] Downloading Kubernetes v1.28.3 preload ...
	I1026 00:53:44.295764   15413 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 ...
	I1026 00:53:44.329347   15413 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4?checksum=md5:6681d82b7b719ef3324102b709ec62eb -> /home/jenkins/minikube-integration/17491-8444/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1026 00:53:47.768265   15413 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 ...
	I1026 00:53:47.768347   15413 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17491-8444/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 ...
	I1026 00:53:48.702406   15413 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1026 00:53:48.702522   15413 profile.go:148] Saving config to /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/download-only-179503/config.json ...
	I1026 00:53:48.702711   15413 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1026 00:53:48.702916   15413 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17491-8444/.minikube/cache/linux/amd64/v1.28.3/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-179503"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.3/LogsDuration (0.08s)

TestDownloadOnly/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.21s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-179503
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnlyKic (1.29s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:225: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-912806 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-912806" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-912806
--- PASS: TestDownloadOnlyKic (1.29s)

TestBinaryMirror (0.75s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-014731 --alsologtostderr --binary-mirror http://127.0.0.1:40063 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-014731" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-014731
--- PASS: TestBinaryMirror (0.75s)

TestOffline (88.96s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-100799 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-100799 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (1m25.483267777s)
helpers_test.go:175: Cleaning up "offline-crio-100799" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-100799
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-100799: (3.475823409s)
--- PASS: TestOffline (88.96s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-211632
addons_test.go:927: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-211632: exit status 85 (64.55413ms)

-- stdout --
	* Profile "addons-211632" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-211632"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-211632
addons_test.go:938: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-211632: exit status 85 (63.545559ms)

-- stdout --
	* Profile "addons-211632" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-211632"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (137.1s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-211632 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-211632 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m17.096156197s)
--- PASS: TestAddons/Setup (137.10s)

TestAddons/parallel/Registry (16.62s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 15.333365ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-svllb" [6462cf6d-b638-4950-bc58-6d40cfa1a9e9] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.012431618s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-q4wbt" [77c9316b-3c51-4ba8-8001-81a3132d7651] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.05604325s
addons_test.go:339: (dbg) Run:  kubectl --context addons-211632 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-211632 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-211632 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.720791309s)
addons_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p addons-211632 ip
2023/10/26 00:56:25 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:387: (dbg) Run:  out/minikube-linux-amd64 -p addons-211632 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.62s)

TestAddons/parallel/MetricsServer (5.75s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 2.967693ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-8pc98" [40138e51-703f-4aa0-b5ec-5392438b711d] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.011572671s
addons_test.go:414: (dbg) Run:  kubectl --context addons-211632 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-linux-amd64 -p addons-211632 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.75s)

TestAddons/parallel/HelmTiller (9.42s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:455: tiller-deploy stabilized in 12.636516ms
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-gth4w" [d29c4ef2-76c0-4d9a-bf0f-ff117c9b1924] Running
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.012287159s
addons_test.go:472: (dbg) Run:  kubectl --context addons-211632 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:472: (dbg) Done: kubectl --context addons-211632 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (3.84276365s)
addons_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p addons-211632 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (9.42s)

TestAddons/parallel/CSI (43.39s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 4.507173ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-211632 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-211632 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-211632 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-211632 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-211632 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-211632 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-211632 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [e421f409-489a-4f72-b0bb-cda0c60c345f] Pending
helpers_test.go:344: "task-pv-pod" [e421f409-489a-4f72-b0bb-cda0c60c345f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [e421f409-489a-4f72-b0bb-cda0c60c345f] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.010074242s
addons_test.go:583: (dbg) Run:  kubectl --context addons-211632 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-211632 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-211632 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-211632 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-211632 delete pod task-pv-pod
addons_test.go:593: (dbg) Done: kubectl --context addons-211632 delete pod task-pv-pod: (1.133611809s)
addons_test.go:599: (dbg) Run:  kubectl --context addons-211632 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-211632 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-211632 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-211632 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-211632 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-211632 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-211632 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-211632 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-211632 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-211632 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-211632 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-211632 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-211632 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [f444a649-7e1c-4a11-903f-5a7dd840528d] Pending
helpers_test.go:344: "task-pv-pod-restore" [f444a649-7e1c-4a11-903f-5a7dd840528d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [f444a649-7e1c-4a11-903f-5a7dd840528d] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.00886527s
addons_test.go:625: (dbg) Run:  kubectl --context addons-211632 delete pod task-pv-pod-restore
addons_test.go:629: (dbg) Run:  kubectl --context addons-211632 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-211632 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-linux-amd64 -p addons-211632 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-linux-amd64 -p addons-211632 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.585634267s)
addons_test.go:641: (dbg) Run:  out/minikube-linux-amd64 -p addons-211632 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (43.39s)
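The repeated `kubectl ... get pvc ... -o jsonpath={.status.phase}` lines above are the test helper polling until the PVC leaves Pending. A minimal shell sketch of that loop, under stated assumptions: the context/namespace names come from the log, and the function name `wait_for_pvc_phase` is illustrative, not from helpers_test.go.

```shell
# Sketch of the polling the helpers perform above.
# Assumptions: kubectl is on PATH; context "addons-211632" and namespace
# "default" are taken from the log; wait_for_pvc_phase is an illustrative
# name, not part of the minikube test suite.
wait_for_pvc_phase() {
  pvc=$1; want=$2; tries=${3:-60}
  i=0
  while [ "$i" -lt "$tries" ]; do
    got=$(kubectl --context addons-211632 get pvc "$pvc" \
            -o "jsonpath={.status.phase}" -n default 2>/dev/null)
    [ "$got" = "$want" ] && return 0   # e.g. phase "Bound"
    i=$((i + 1)); sleep 2
  done
  return 1
}
```

The exit status (0 on the desired phase, 1 on timeout) mirrors how the Go helper decides pass/fail.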

                                                
                                    
TestAddons/parallel/Headlamp (13.93s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-211632 --alsologtostderr -v=1
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-94b766c-l89p5" [6e47b2e5-16ba-4060-8c8e-fbd8dc256275] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-94b766c-l89p5" [6e47b2e5-16ba-4060-8c8e-fbd8dc256275] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.009551414s
--- PASS: TestAddons/parallel/Headlamp (13.93s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.29s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-56665cdfc-qtjfd" [da47e22e-fc17-4f39-ac47-15851c64b980] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.008237317s
addons_test.go:859: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-211632
--- PASS: TestAddons/parallel/CloudSpanner (5.29s)

                                                
                                    
TestAddons/parallel/LocalPath (54.51s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-211632 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-211632 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-211632 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-211632 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-211632 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-211632 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-211632 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-211632 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [672eefb5-c76c-4f18-a265-9aca6883722d] Pending
helpers_test.go:344: "test-local-path" [672eefb5-c76c-4f18-a265-9aca6883722d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [672eefb5-c76c-4f18-a265-9aca6883722d] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [672eefb5-c76c-4f18-a265-9aca6883722d] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.009042776s
addons_test.go:890: (dbg) Run:  kubectl --context addons-211632 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-linux-amd64 -p addons-211632 ssh "cat /opt/local-path-provisioner/pvc-12cb842a-8d18-426c-8f30-ad9da7858417_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-211632 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-211632 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-linux-amd64 -p addons-211632 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:919: (dbg) Done: out/minikube-linux-amd64 -p addons-211632 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.611344868s)
--- PASS: TestAddons/parallel/LocalPath (54.51s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.49s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-bbnbx" [64d4d05e-0610-4bb6-a7cc-53da0eb05823] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.02347606s
addons_test.go:954: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-211632
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.49s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.12s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-211632 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-211632 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.22s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-211632
addons_test.go:171: (dbg) Done: out/minikube-linux-amd64 stop -p addons-211632: (11.931341539s)
addons_test.go:175: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-211632
addons_test.go:179: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-211632
addons_test.go:184: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-211632
--- PASS: TestAddons/StoppedEnableDisable (12.22s)

                                                
                                    
TestCertOptions (27.39s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-872100 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-872100 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (24.751760433s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-872100 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-872100 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-872100 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-872100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-872100
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-872100: (2.035273002s)
--- PASS: TestCertOptions (27.39s)
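The cert check above runs `openssl x509 -text -noout` against the apiserver certificate inside the node and looks for the requested names and IPs. A standalone sketch of that inspection, with the caveat that the wrapper name `cert_has_san` is an illustrative assumption; the actual test parses the openssl output in Go.

```shell
# Sketch: verify a certificate advertises an expected SAN, mirroring the
# "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
# check above. cert_has_san is an illustrative name, not from the suite.
cert_has_san() {
  cert=$1; san=$2
  openssl x509 -text -noout -in "$cert" | grep -q "$san"
}
```

Grepping the decoded text is a loose match (it also matches the subject line); the test suite's Go parsing is stricter, but the exit-status idea is the same.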

                                                
                                    
TestCertExpiration (240.32s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-884207 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-884207 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (31.654787043s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-884207 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-884207 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (26.612741188s)
helpers_test.go:175: Cleaning up "cert-expiration-884207" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-884207
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-884207: (2.046786211s)
--- PASS: TestCertExpiration (240.32s)

                                                
                                    
TestForceSystemdFlag (31.98s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-958909 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-958909 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (29.319952149s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-958909 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-958909" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-958909
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-958909: (2.368703629s)
--- PASS: TestForceSystemdFlag (31.98s)

                                                
                                    
TestForceSystemdEnv (43.9s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-138950 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-138950 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (38.610348694s)
helpers_test.go:175: Cleaning up "force-systemd-env-138950" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-138950
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-138950: (5.292948561s)
--- PASS: TestForceSystemdEnv (43.90s)

                                                
                                    
TestKVMDriverInstallOrUpdate (3.04s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.04s)

                                                
                                    
TestErrorSpam/setup (24.34s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-862679 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-862679 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-862679 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-862679 --driver=docker  --container-runtime=crio: (24.344611756s)
--- PASS: TestErrorSpam/setup (24.34s)

                                                
                                    
TestErrorSpam/start (0.66s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-862679 --log_dir /tmp/nospam-862679 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-862679 --log_dir /tmp/nospam-862679 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-862679 --log_dir /tmp/nospam-862679 start --dry-run
--- PASS: TestErrorSpam/start (0.66s)

                                                
                                    
TestErrorSpam/status (0.89s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-862679 --log_dir /tmp/nospam-862679 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-862679 --log_dir /tmp/nospam-862679 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-862679 --log_dir /tmp/nospam-862679 status
--- PASS: TestErrorSpam/status (0.89s)

                                                
                                    
TestErrorSpam/pause (1.53s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-862679 --log_dir /tmp/nospam-862679 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-862679 --log_dir /tmp/nospam-862679 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-862679 --log_dir /tmp/nospam-862679 pause
--- PASS: TestErrorSpam/pause (1.53s)

                                                
                                    
TestErrorSpam/unpause (1.49s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-862679 --log_dir /tmp/nospam-862679 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-862679 --log_dir /tmp/nospam-862679 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-862679 --log_dir /tmp/nospam-862679 unpause
--- PASS: TestErrorSpam/unpause (1.49s)

                                                
                                    
TestErrorSpam/stop (1.42s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-862679 --log_dir /tmp/nospam-862679 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-862679 --log_dir /tmp/nospam-862679 stop: (1.21412719s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-862679 --log_dir /tmp/nospam-862679 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-862679 --log_dir /tmp/nospam-862679 stop
--- PASS: TestErrorSpam/stop (1.42s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17491-8444/.minikube/files/etc/test/nested/copy/15246/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (40.6s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-052267 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-052267 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (40.59633581s)
--- PASS: TestFunctional/serial/StartWithProxy (40.60s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (38.93s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-052267 --alsologtostderr -v=8
E1026 01:01:09.209450   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/client.crt: no such file or directory
E1026 01:01:09.215536   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/client.crt: no such file or directory
E1026 01:01:09.225908   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/client.crt: no such file or directory
E1026 01:01:09.246974   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/client.crt: no such file or directory
E1026 01:01:09.287351   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/client.crt: no such file or directory
E1026 01:01:09.367821   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/client.crt: no such file or directory
E1026 01:01:09.528266   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/client.crt: no such file or directory
E1026 01:01:09.848981   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/client.crt: no such file or directory
E1026 01:01:10.489440   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/client.crt: no such file or directory
E1026 01:01:11.769946   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/client.crt: no such file or directory
E1026 01:01:14.330874   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/client.crt: no such file or directory
E1026 01:01:19.451166   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-052267 --alsologtostderr -v=8: (38.924387425s)
functional_test.go:659: soft start took 38.925151173s for "functional-052267" cluster.
--- PASS: TestFunctional/serial/SoftStart (38.93s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-052267 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.8s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 cache add registry.k8s.io/pause:3.1
E1026 01:01:29.691740   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/client.crt: no such file or directory
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-052267 cache add registry.k8s.io/pause:3.3: (1.029834946s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.80s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.19s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-052267 /tmp/TestFunctionalserialCacheCmdcacheadd_local2467746973/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 cache add minikube-local-cache-test:functional-052267
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 cache delete minikube-local-cache-test:functional-052267
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-052267
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.19s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.66s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-052267 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (278.307821ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.66s)
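The reload check above relies on `crictl inspecti` exiting non-zero while the image is absent (the FATA line) and zero once `cache reload` restores it. A sketch of that presence test; the wrapper name `image_present` is an illustrative assumption, and in the log the command actually runs inside the node under sudo via `minikube ssh`.

```shell
# Sketch: the exit status of "crictl inspecti" signals image presence,
# which is what the cache_reload test asserts above. image_present is an
# illustrative wrapper, not part of the suite; in the log the call runs
# as "minikube ssh sudo crictl inspecti <image>".
image_present() {
  crictl inspecti "$1" >/dev/null 2>&1
}
```

Using the exit status instead of parsing output keeps the check robust against crictl's log formatting.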

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.13s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 kubectl -- --context functional-052267 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-052267 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (33.36s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-052267 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1026 01:01:50.173075   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-052267 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.358486313s)
functional_test.go:757: restart took 33.358625459s for "functional-052267" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (33.36s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-052267 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.42s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-052267 logs: (1.417485118s)
--- PASS: TestFunctional/serial/LogsCmd (1.42s)

TestFunctional/serial/LogsFileCmd (1.4s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 logs --file /tmp/TestFunctionalserialLogsFileCmd376870860/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-052267 logs --file /tmp/TestFunctionalserialLogsFileCmd376870860/001/logs.txt: (1.398198633s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.40s)

TestFunctional/serial/InvalidService (4.63s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-052267 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-052267
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-052267: exit status 115 (342.754152ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31024 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-052267 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-052267 delete -f testdata/invalidsvc.yaml: (1.054717075s)
--- PASS: TestFunctional/serial/InvalidService (4.63s)

TestFunctional/parallel/ConfigCmd (0.48s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-052267 config get cpus: exit status 14 (81.607464ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-052267 config get cpus: exit status 14 (72.465877ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.48s)

TestFunctional/parallel/DashboardCmd (9.55s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-052267 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-052267 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 55737: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.55s)

TestFunctional/parallel/DryRun (0.41s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-052267 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-052267 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (174.565315ms)

-- stdout --
	* [functional-052267] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17491
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17491-8444/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17491-8444/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1026 01:02:50.296137   54782 out.go:296] Setting OutFile to fd 1 ...
	I1026 01:02:50.296405   54782 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1026 01:02:50.296415   54782 out.go:309] Setting ErrFile to fd 2...
	I1026 01:02:50.296422   54782 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1026 01:02:50.296638   54782 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17491-8444/.minikube/bin
	I1026 01:02:50.297174   54782 out.go:303] Setting JSON to false
	I1026 01:02:50.298332   54782 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2716,"bootTime":1698279454,"procs":440,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 01:02:50.298399   54782 start.go:138] virtualization: kvm guest
	I1026 01:02:50.300882   54782 out.go:177] * [functional-052267] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1026 01:02:50.303199   54782 out.go:177]   - MINIKUBE_LOCATION=17491
	I1026 01:02:50.303232   54782 notify.go:220] Checking for updates...
	I1026 01:02:50.304744   54782 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 01:02:50.307063   54782 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17491-8444/kubeconfig
	I1026 01:02:50.308500   54782 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17491-8444/.minikube
	I1026 01:02:50.309898   54782 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 01:02:50.311567   54782 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 01:02:50.313797   54782 config.go:182] Loaded profile config "functional-052267": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1026 01:02:50.314543   54782 driver.go:378] Setting default libvirt URI to qemu:///system
	I1026 01:02:50.341429   54782 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1026 01:02:50.341540   54782 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 01:02:50.399696   54782 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:47 SystemTime:2023-10-26 01:02:50.39018115 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 01:02:50.399831   54782 docker.go:295] overlay module found
	I1026 01:02:50.402180   54782 out.go:177] * Using the docker driver based on existing profile
	I1026 01:02:50.403730   54782 start.go:298] selected driver: docker
	I1026 01:02:50.403748   54782 start.go:902] validating driver "docker" against &{Name:functional-052267 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-052267 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1026 01:02:50.403836   54782 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 01:02:50.405984   54782 out.go:177] 
	W1026 01:02:50.407448   54782 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1026 01:02:50.409004   54782 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-052267 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.41s)

TestFunctional/parallel/InternationalLanguage (0.17s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-052267 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-052267 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (165.918684ms)

-- stdout --
	* [functional-052267] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17491
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17491-8444/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17491-8444/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1026 01:02:50.704985   55038 out.go:296] Setting OutFile to fd 1 ...
	I1026 01:02:50.705096   55038 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1026 01:02:50.705110   55038 out.go:309] Setting ErrFile to fd 2...
	I1026 01:02:50.705115   55038 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1026 01:02:50.705396   55038 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17491-8444/.minikube/bin
	I1026 01:02:50.705968   55038 out.go:303] Setting JSON to false
	I1026 01:02:50.707081   55038 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2716,"bootTime":1698279454,"procs":450,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 01:02:50.707142   55038 start.go:138] virtualization: kvm guest
	I1026 01:02:50.709942   55038 out.go:177] * [functional-052267] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	I1026 01:02:50.711627   55038 out.go:177]   - MINIKUBE_LOCATION=17491
	I1026 01:02:50.713060   55038 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 01:02:50.711643   55038 notify.go:220] Checking for updates...
	I1026 01:02:50.715939   55038 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17491-8444/kubeconfig
	I1026 01:02:50.717428   55038 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17491-8444/.minikube
	I1026 01:02:50.718863   55038 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 01:02:50.720403   55038 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 01:02:50.722387   55038 config.go:182] Loaded profile config "functional-052267": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1026 01:02:50.722838   55038 driver.go:378] Setting default libvirt URI to qemu:///system
	I1026 01:02:50.745244   55038 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1026 01:02:50.745381   55038 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 01:02:50.797153   55038 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:47 SystemTime:2023-10-26 01:02:50.788320483 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 01:02:50.797289   55038 docker.go:295] overlay module found
	I1026 01:02:50.799457   55038 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1026 01:02:50.801105   55038 start.go:298] selected driver: docker
	I1026 01:02:50.801126   55038 start.go:902] validating driver "docker" against &{Name:functional-052267 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-052267 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1026 01:02:50.801210   55038 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 01:02:50.803509   55038 out.go:177] 
	W1026 01:02:50.805049   55038 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1026 01:02:50.806510   55038 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

TestFunctional/parallel/StatusCmd (1s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.00s)

TestFunctional/parallel/ServiceCmdConnect (7.81s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-052267 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-052267 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-hjqch" [0b712765-1aa2-49fe-92f9-c54228e3cbcd] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-hjqch" [0b712765-1aa2-49fe-92f9-c54228e3cbcd] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.010742985s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:32420
functional_test.go:1674: http://192.168.49.2:32420: success! body:

Hostname: hello-node-connect-55497b8b78-hjqch

Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32420
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.81s)

TestFunctional/parallel/AddonsCmd (0.16s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

TestFunctional/parallel/PersistentVolumeClaim (40.22s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [379e5931-0b06-4825-bd67-66b8275cea05] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.011932959s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-052267 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-052267 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-052267 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-052267 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-052267 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [72c61fc6-d566-48db-b46d-a7975aa4d1eb] Pending
helpers_test.go:344: "sp-pod" [72c61fc6-d566-48db-b46d-a7975aa4d1eb] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [72c61fc6-d566-48db-b46d-a7975aa4d1eb] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 23.013877698s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-052267 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-052267 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-052267 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [05891194-a896-4430-84bc-07f2a9ecdcb1] Pending
helpers_test.go:344: "sp-pod" [05891194-a896-4430-84bc-07f2a9ecdcb1] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [05891194-a896-4430-84bc-07f2a9ecdcb1] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.009943554s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-052267 exec sp-pod -- ls /tmp/mount
2023/10/26 01:03:00 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (40.22s)

TestFunctional/parallel/SSHCmd (0.62s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.62s)

TestFunctional/parallel/CpCmd (1.24s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 ssh -n functional-052267 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 cp functional-052267:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd566902769/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 ssh -n functional-052267 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.24s)

TestFunctional/parallel/MySQL (25.72s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-052267 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-97fpd" [bad3cd51-5952-455f-b589-9af99c3f34a9] Pending
helpers_test.go:344: "mysql-859648c796-97fpd" [bad3cd51-5952-455f-b589-9af99c3f34a9] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-97fpd" [bad3cd51-5952-455f-b589-9af99c3f34a9] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.013128142s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-052267 exec mysql-859648c796-97fpd -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-052267 exec mysql-859648c796-97fpd -- mysql -ppassword -e "show databases;": exit status 1 (336.763446ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-052267 exec mysql-859648c796-97fpd -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-052267 exec mysql-859648c796-97fpd -- mysql -ppassword -e "show databases;": exit status 1 (174.772605ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-052267 exec mysql-859648c796-97fpd -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-052267 exec mysql-859648c796-97fpd -- mysql -ppassword -e "show databases;": exit status 1 (316.073658ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-052267 exec mysql-859648c796-97fpd -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.72s)

TestFunctional/parallel/FileSync (0.3s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/15246/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 ssh "sudo cat /etc/test/nested/copy/15246/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.30s)

TestFunctional/parallel/CertSync (1.85s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/15246.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 ssh "sudo cat /etc/ssl/certs/15246.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/15246.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 ssh "sudo cat /usr/share/ca-certificates/15246.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/152462.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 ssh "sudo cat /etc/ssl/certs/152462.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/152462.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 ssh "sudo cat /usr/share/ca-certificates/152462.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.85s)

TestFunctional/parallel/NodeLabels (0.15s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-052267 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.15s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.61s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-052267 ssh "sudo systemctl is-active docker": exit status 1 (281.478724ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-052267 ssh "sudo systemctl is-active containerd": exit status 1 (328.69634ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.61s)

TestFunctional/parallel/License (0.23s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.23s)

TestFunctional/parallel/ServiceCmd/DeployApp (9.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-052267 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-052267 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-5h58r" [871edfbb-14f8-4852-9f2a-296c9caf7a10] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-5h58r" [871edfbb-14f8-4852-9f2a-296c9caf7a10] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.053326708s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.25s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.25s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.25s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.44s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-052267 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-052267 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-052267 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-052267 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 50054: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.44s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-052267 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (23.35s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-052267 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [7fe9b850-229e-489a-a6b1-efd8a7ed0468] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [7fe9b850-229e-489a-a6b1-efd8a7ed0468] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 23.010892409s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (23.35s)

TestFunctional/parallel/ServiceCmd/List (0.75s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.75s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.61s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 service list -o json
functional_test.go:1493: Took "613.688223ms" to run "out/minikube-linux-amd64 -p functional-052267 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.61s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:30866
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

TestFunctional/parallel/ServiceCmd/Format (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.37s)

TestFunctional/parallel/ServiceCmd/URL (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:30866
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.37s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.48s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.48s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-052267 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.3
registry.k8s.io/kube-proxy:v1.28.3
registry.k8s.io/kube-controller-manager:v1.28.3
registry.k8s.io/kube-apiserver:v1.28.3
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-052267
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-052267 image ls --format short --alsologtostderr:
I1026 01:02:53.335665   56425 out.go:296] Setting OutFile to fd 1 ...
I1026 01:02:53.335948   56425 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1026 01:02:53.335958   56425 out.go:309] Setting ErrFile to fd 2...
I1026 01:02:53.335963   56425 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1026 01:02:53.336169   56425 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17491-8444/.minikube/bin
I1026 01:02:53.336787   56425 config.go:182] Loaded profile config "functional-052267": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1026 01:02:53.336899   56425 config.go:182] Loaded profile config "functional-052267": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1026 01:02:53.337288   56425 cli_runner.go:164] Run: docker container inspect functional-052267 --format={{.State.Status}}
I1026 01:02:53.354412   56425 ssh_runner.go:195] Run: systemctl --version
I1026 01:02:53.354459   56425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-052267
I1026 01:02:53.372263   56425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/functional-052267/id_rsa Username:docker}
I1026 01:02:53.458687   56425 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-052267 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-proxy              | v1.28.3            | bfc896cf80fba | 74.7MB |
| gcr.io/google-containers/addon-resizer  | functional-052267  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/kube-controller-manager | v1.28.3            | 10baa1ca17068 | 123MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/mysql                 | 5.7                | 3b85be0b10d38 | 601MB  |
| registry.k8s.io/kube-scheduler          | v1.28.3            | 6d1b4fd1b182d | 61.5MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-apiserver          | v1.28.3            | 5374347291230 | 127MB  |
| docker.io/library/nginx                 | alpine             | b135667c98980 | 49.5MB |
| docker.io/library/nginx                 | latest             | 593aee2afb642 | 191MB  |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-052267 image ls --format table --alsologtostderr:
I1026 01:02:54.386546   56898 out.go:296] Setting OutFile to fd 1 ...
I1026 01:02:54.386664   56898 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1026 01:02:54.386673   56898 out.go:309] Setting ErrFile to fd 2...
I1026 01:02:54.386680   56898 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1026 01:02:54.386875   56898 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17491-8444/.minikube/bin
I1026 01:02:54.387473   56898 config.go:182] Loaded profile config "functional-052267": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1026 01:02:54.387603   56898 config.go:182] Loaded profile config "functional-052267": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1026 01:02:54.387998   56898 cli_runner.go:164] Run: docker container inspect functional-052267 --format={{.State.Status}}
I1026 01:02:54.415356   56898 ssh_runner.go:195] Run: systemctl --version
I1026 01:02:54.415434   56898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-052267
I1026 01:02:54.432870   56898 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/functional-052267/id_rsa Username:docker}
I1026 01:02:54.539159   56898 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-052267 image ls --format json --alsologtostderr:
[{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-052267"],"size":"34114467"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076","repoDigests":["registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab","registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.3"],"size":"127165392"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"b135667c98980d3ca424a228cc4d2afdb287dc4e1a6a813a34b2e1705517488e","repoDigests":["docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d","docker.io/library/nginx@sha256:db353d0f0c479c91bd15e01fc68ed0f33d9c4c52f3415e63332c3d0bf7a4bb77"],"repoTags":["docker.io/library/nginx:alpine"],"size":"49538855"},{"id":"bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf","repoDigests":["registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8","registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.3"],"size":"74691991"},{"id":"6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4","repoDigests":["registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725","registry.k8s.io/kube-scheduler@sha256:fbe8838032fa8f01b36282417596119a481e5bc11eca89270073122f0cc90374"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.3"],"size":"61498678"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707","registry.k8s.io/kube-controller-manager@sha256:dd4817791cfaa85482f27af472e4b100e362134530a7c4bae50f3ce10729d75d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.3"],"size":"123188534"},{"id":"3b85be0b10d389e268b35d4c04075b95c295dd24d595e8c5261e43ab94c47de4","repoDigests":["docker.io/library/mysql@sha256:188121394576d05aedb5daf229403bf58d4ee16e04e81828e4d43b72bd227bc2","docker.io/library/mysql@sha256:4f9bfb0f7dd97739ceedb546b381534bb11e9b4abf013d6ad9ae6473fed66099"],"repoTags":["docker.io/library/mysql:5.7"],"size":"600824773"},{"id":"593aee2afb642798b83a85306d2625fd7f089c0a1242c7e75a237846d80aa2a0","repoDigests":["docker.io/library/nginx@sha256:0d60ba9498d4491525334696a736b4c19b56231b972061fab2f536d48ebfd7ce","docker.io/library/nginx@sha256:add4792d930c25dd2abf2ef9ea79de578097a1c175a16ab25814332fe33622de"],"repoTags":["docker.io/library/nginx:latest"],"size":"190960382"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"},{"id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"65258016"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-052267 image ls --format json --alsologtostderr:
I1026 01:02:54.120883   56798 out.go:296] Setting OutFile to fd 1 ...
I1026 01:02:54.121024   56798 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1026 01:02:54.121034   56798 out.go:309] Setting ErrFile to fd 2...
I1026 01:02:54.121041   56798 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1026 01:02:54.121344   56798 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17491-8444/.minikube/bin
I1026 01:02:54.122193   56798 config.go:182] Loaded profile config "functional-052267": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1026 01:02:54.122347   56798 config.go:182] Loaded profile config "functional-052267": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1026 01:02:54.122969   56798 cli_runner.go:164] Run: docker container inspect functional-052267 --format={{.State.Status}}
I1026 01:02:54.143164   56798 ssh_runner.go:195] Run: systemctl --version
I1026 01:02:54.143209   56798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-052267
I1026 01:02:54.161153   56798 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/functional-052267/id_rsa Username:docker}
I1026 01:02:54.249812   56798 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)
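The JSON listing above follows crictl's image schema: each entry carries `id`, `repoDigests`, `repoTags`, and a string-valued `size`. As a minimal sketch of consuming output in that shape — the abridged payload below is hand-made from two entries in the listing, not the full dump:

```python
import json

# Abridged sample in the same shape as the `image ls --format json` output above.
raw = """[
  {"id": "ead0a4a53df8", "repoTags": ["registry.k8s.io/coredns/coredns:v1.10.1"], "size": "53621675"},
  {"id": "73deb9a3f702", "repoTags": ["registry.k8s.io/etcd:3.5.9-0"], "size": "295456551"}
]"""

def summarize(images_json: str) -> list[tuple[str, int]]:
    """Return (first repo tag, size in bytes) per image.

    Note that `size` arrives as a string in this schema and must be
    converted before doing arithmetic on it.
    """
    images = json.loads(images_json)
    return [(img["repoTags"][0], int(img["size"])) for img in images]

for tag, size in summarize(raw):
    print(f"{tag}: {size / 1e6:.1f} MB")
```

The string-typed `size` field is the main trap here; the table-format listing above is the same data after exactly this kind of conversion and rounding.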

TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-052267 image ls --format yaml --alsologtostderr:
- id: 6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725
- registry.k8s.io/kube-scheduler@sha256:fbe8838032fa8f01b36282417596119a481e5bc11eca89270073122f0cc90374
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.3
size: "61498678"
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-052267
size: "34114467"
- id: 53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab
- registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.3
size: "127165392"
- id: 10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707
- registry.k8s.io/kube-controller-manager@sha256:dd4817791cfaa85482f27af472e4b100e362134530a7c4bae50f3ce10729d75d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.3
size: "123188534"
- id: bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf
repoDigests:
- registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8
- registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072
repoTags:
- registry.k8s.io/kube-proxy:v1.28.3
size: "74691991"
- id: 3b85be0b10d389e268b35d4c04075b95c295dd24d595e8c5261e43ab94c47de4
repoDigests:
- docker.io/library/mysql@sha256:188121394576d05aedb5daf229403bf58d4ee16e04e81828e4d43b72bd227bc2
- docker.io/library/mysql@sha256:4f9bfb0f7dd97739ceedb546b381534bb11e9b4abf013d6ad9ae6473fed66099
repoTags:
- docker.io/library/mysql:5.7
size: "600824773"
- id: 593aee2afb642798b83a85306d2625fd7f089c0a1242c7e75a237846d80aa2a0
repoDigests:
- docker.io/library/nginx@sha256:0d60ba9498d4491525334696a736b4c19b56231b972061fab2f536d48ebfd7ce
- docker.io/library/nginx@sha256:add4792d930c25dd2abf2ef9ea79de578097a1c175a16ab25814332fe33622de
repoTags:
- docker.io/library/nginx:latest
size: "190960382"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: b135667c98980d3ca424a228cc4d2afdb287dc4e1a6a813a34b2e1705517488e
repoDigests:
- docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d
- docker.io/library/nginx@sha256:db353d0f0c479c91bd15e01fc68ed0f33d9c4c52f3415e63332c3d0bf7a4bb77
repoTags:
- docker.io/library/nginx:alpine
size: "49538855"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-052267 image ls --format yaml --alsologtostderr:
I1026 01:02:53.564223   56518 out.go:296] Setting OutFile to fd 1 ...
I1026 01:02:53.564346   56518 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1026 01:02:53.564355   56518 out.go:309] Setting ErrFile to fd 2...
I1026 01:02:53.564360   56518 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1026 01:02:53.564563   56518 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17491-8444/.minikube/bin
I1026 01:02:53.565175   56518 config.go:182] Loaded profile config "functional-052267": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1026 01:02:53.565290   56518 config.go:182] Loaded profile config "functional-052267": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1026 01:02:53.565782   56518 cli_runner.go:164] Run: docker container inspect functional-052267 --format={{.State.Status}}
I1026 01:02:53.584651   56518 ssh_runner.go:195] Run: systemctl --version
I1026 01:02:53.584706   56518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-052267
I1026 01:02:53.601939   56518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/functional-052267/id_rsa Username:docker}
I1026 01:02:53.686374   56518 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.06s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-052267 ssh pgrep buildkitd: exit status 1 (297.515467ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 image build -t localhost/my-image:functional-052267 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-052267 image build -t localhost/my-image:functional-052267 testdata/build --alsologtostderr: (1.539174845s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-052267 image build -t localhost/my-image:functional-052267 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> e04fe396d1c
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-052267
--> 9547bd0fbf6
Successfully tagged localhost/my-image:functional-052267
9547bd0fbf6cb0b2c22168819873e66f1ff1bc44fc29c986965e54c065d08d32
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-052267 image build -t localhost/my-image:functional-052267 testdata/build --alsologtostderr:
I1026 01:02:54.099010   56788 out.go:296] Setting OutFile to fd 1 ...
I1026 01:02:54.099187   56788 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1026 01:02:54.099196   56788 out.go:309] Setting ErrFile to fd 2...
I1026 01:02:54.099201   56788 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1026 01:02:54.099370   56788 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17491-8444/.minikube/bin
I1026 01:02:54.099964   56788 config.go:182] Loaded profile config "functional-052267": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1026 01:02:54.100482   56788 config.go:182] Loaded profile config "functional-052267": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1026 01:02:54.100897   56788 cli_runner.go:164] Run: docker container inspect functional-052267 --format={{.State.Status}}
I1026 01:02:54.119858   56788 ssh_runner.go:195] Run: systemctl --version
I1026 01:02:54.119931   56788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-052267
I1026 01:02:54.138688   56788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/functional-052267/id_rsa Username:docker}
I1026 01:02:54.230847   56788 build_images.go:151] Building image from path: /tmp/build.159548222.tar
I1026 01:02:54.230910   56788 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1026 01:02:54.240080   56788 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.159548222.tar
I1026 01:02:54.243377   56788 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.159548222.tar: stat -c "%s %y" /var/lib/minikube/build/build.159548222.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.159548222.tar': No such file or directory
I1026 01:02:54.243413   56788 ssh_runner.go:362] scp /tmp/build.159548222.tar --> /var/lib/minikube/build/build.159548222.tar (3072 bytes)
I1026 01:02:54.301819   56788 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.159548222
I1026 01:02:54.311347   56788 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.159548222 -xf /var/lib/minikube/build/build.159548222.tar
I1026 01:02:54.321645   56788 crio.go:297] Building image: /var/lib/minikube/build/build.159548222
I1026 01:02:54.321720   56788 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-052267 /var/lib/minikube/build/build.159548222 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1026 01:02:55.543389   56788 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-052267 /var/lib/minikube/build/build.159548222 --cgroup-manager=cgroupfs: (1.221639433s)
I1026 01:02:55.543459   56788 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.159548222
I1026 01:02:55.551943   56788 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.159548222.tar
I1026 01:02:55.559828   56788 build_images.go:207] Built localhost/my-image:functional-052267 from /tmp/build.159548222.tar
I1026 01:02:55.559864   56788 build_images.go:123] succeeded building to: functional-052267
I1026 01:02:55.559869   56788 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.06s)
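The stderr above spells out the build path used with the crio runtime: pack the build context into a tar under /tmp, copy it onto the node, unpack it into a per-build directory under /var/lib/minikube/build, run `sudo podman build` against that directory, then remove both the tar and the directory. A rough local sketch of just the tar staging round-trip — the paths and file names below are invented for illustration, not taken from minikube's code:

```python
import tarfile
import tempfile
from pathlib import Path

def stage_build_context(context_dir: Path, staging_root: Path) -> Path:
    """Pack a build context into a tar, then unpack it into a per-build
    directory -- mirroring the build.NNN.tar / build.NNN steps in the log."""
    staging_root.mkdir(parents=True, exist_ok=True)
    tar_path = staging_root / "build.tar"
    with tarfile.open(tar_path, "w") as tf:
        tf.add(context_dir, arcname=".")  # stand-in for the scp step
    build_dir = staging_root / "build"
    build_dir.mkdir(parents=True, exist_ok=True)
    with tarfile.open(tar_path) as tf:
        tf.extractall(build_dir)  # stand-in for `tar -C ... -xf ...`
    return build_dir

# Usage: create a tiny context resembling testdata/build and stage it.
with tempfile.TemporaryDirectory() as tmp:
    ctx = Path(tmp) / "ctx"
    ctx.mkdir()
    (ctx / "Dockerfile").write_text("FROM gcr.io/k8s-minikube/busybox\nADD content.txt /\n")
    (ctx / "content.txt").write_text("hello\n")
    staged = stage_build_context(ctx, Path(tmp) / "staging")
    print(sorted(p.name for p in staged.iterdir()))  # ['Dockerfile', 'content.txt']
```

The real flow then hands the staged directory to `podman build -t localhost/my-image:... <build_dir>`, which is why the log ends with the two `sudo rm` cleanup runs.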

TestFunctional/parallel/ImageCommands/Setup (0.9s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-052267
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.90s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (7.28s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 image load --daemon gcr.io/google-containers/addon-resizer:functional-052267 --alsologtostderr
E1026 01:02:31.133941   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/client.crt: no such file or directory
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-052267 image load --daemon gcr.io/google-containers/addon-resizer:functional-052267 --alsologtostderr: (7.059636782s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (7.28s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.53s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 image load --daemon gcr.io/google-containers/addon-resizer:functional-052267 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-052267 image load --daemon gcr.io/google-containers/addon-resizer:functional-052267 --alsologtostderr: (4.308307781s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.53s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.16s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-052267
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 image load --daemon gcr.io/google-containers/addon-resizer:functional-052267 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-052267 image load --daemon gcr.io/google-containers/addon-resizer:functional-052267 --alsologtostderr: (4.181911808s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.16s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-052267 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.108.20.177 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-052267 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "338.875813ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "69.05766ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "371.766765ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "70.856331ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (6.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-052267 /tmp/TestFunctionalparallelMountCmdany-port3904176724/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1698282163839912812" to /tmp/TestFunctionalparallelMountCmdany-port3904176724/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1698282163839912812" to /tmp/TestFunctionalparallelMountCmdany-port3904176724/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1698282163839912812" to /tmp/TestFunctionalparallelMountCmdany-port3904176724/001/test-1698282163839912812
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-052267 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (324.252483ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 26 01:02 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 26 01:02 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 26 01:02 test-1698282163839912812
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 ssh cat /mount-9p/test-1698282163839912812
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-052267 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [5c2ca055-6681-4756-9b8c-fa23fccbffcf] Pending
helpers_test.go:344: "busybox-mount" [5c2ca055-6681-4756-9b8c-fa23fccbffcf] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [5c2ca055-6681-4756-9b8c-fa23fccbffcf] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [5c2ca055-6681-4756-9b8c-fa23fccbffcf] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.012119943s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-052267 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-052267 /tmp/TestFunctionalparallelMountCmdany-port3904176724/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.15s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 image save gcr.io/google-containers/addon-resizer:functional-052267 /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.88s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 image rm gcr.io/google-containers/addon-resizer:functional-052267 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.46s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 image load /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-052267
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 image save --daemon gcr.io/google-containers/addon-resizer:functional-052267 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-052267
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.96s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-052267 /tmp/TestFunctionalparallelMountCmdspecific-port3535926949/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-052267 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (299.12195ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-052267 /tmp/TestFunctionalparallelMountCmdspecific-port3535926949/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-052267 ssh "sudo umount -f /mount-9p": exit status 1 (293.180032ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-052267 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-052267 /tmp/TestFunctionalparallelMountCmdspecific-port3535926949/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.99s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-052267 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3888175978/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-052267 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3888175978/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-052267 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3888175978/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-052267 ssh "findmnt -T" /mount1: exit status 1 (456.549738ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-052267 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-052267 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-052267 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3888175978/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-052267 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3888175978/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-052267 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3888175978/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.07s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-052267
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-052267
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-052267
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (70.82s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-075799 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1026 01:03:53.054921   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-075799 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m10.816190827s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (70.82s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.9s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-075799 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-075799 addons enable ingress --alsologtostderr -v=5: (10.895765635s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.90s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.55s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-075799 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.55s)

                                                
                                    
TestJSONOutput/start/Command (69.62s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-067338 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E1026 01:07:37.020145   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/functional-052267/client.crt: no such file or directory
E1026 01:07:57.501031   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/functional-052267/client.crt: no such file or directory
E1026 01:08:38.461865   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/functional-052267/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-067338 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m9.616616279s)
--- PASS: TestJSONOutput/start/Command (69.62s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.66s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-067338 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.66s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.61s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-067338 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.76s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-067338 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-067338 --output=json --user=testUser: (5.757684663s)
--- PASS: TestJSONOutput/stop/Command (5.76s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.23s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-504700 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-504700 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (82.833876ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"d1b99fb4-1363-44cc-8dce-a6c9ac377acf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-504700] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8ddc1b4f-ee88-4ca5-a595-3bfb2a448632","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17491"}}
	{"specversion":"1.0","id":"8aaee75d-660d-4687-9cba-c1135e5c3f9c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"725f7b76-a5f7-4e24-ae43-a0a63caf9014","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17491-8444/kubeconfig"}}
	{"specversion":"1.0","id":"912ad1da-ea9c-4048-b904-b08525390455","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17491-8444/.minikube"}}
	{"specversion":"1.0","id":"3c746352-713e-47c9-9de0-3491f9cc367a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"ed9a05ec-7f1e-458a-910c-60fad839300b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ca4dfd5b-2825-4514-9cda-44e4b8279a72","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-504700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-504700
--- PASS: TestErrorJSONOutput (0.23s)
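Note: each line minikube emits with `--output=json` (as in the stdout block above) is a CloudEvents-style JSON object, so expected failures like the `DRV_UNSUPPORTED_OS` error can be picked out mechanically when post-processing a report like this. A minimal sketch in standard-library Python, using the error event copied from the output above:

```python
import json

# One CloudEvents-style event, verbatim from the `--output=json` stdout above.
line = '''{"specversion":"1.0","id":"ca4dfd5b-2825-4514-9cda-44e4b8279a72","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}'''

event = json.loads(line)
# The last segment of the "type" attribute distinguishes step/info/error events.
kind = event["type"].rsplit(".", 1)[-1]
if kind == "error":
    data = event["data"]
    print(f'{data["name"]}: {data["message"]} (exit {data["exitcode"]})')
```

This is only a sketch of consuming the event stream; the test itself (json_output_test.go) does its own validation of these events.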

                                                
                                    
TestKicCustomNetwork/create_custom_network (30s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-327435 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-327435 --network=: (27.907390288s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-327435" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-327435
E1026 01:09:25.863210   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/ingress-addon-legacy-075799/client.crt: no such file or directory
E1026 01:09:25.868498   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/ingress-addon-legacy-075799/client.crt: no such file or directory
E1026 01:09:25.878760   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/ingress-addon-legacy-075799/client.crt: no such file or directory
E1026 01:09:25.899048   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/ingress-addon-legacy-075799/client.crt: no such file or directory
E1026 01:09:25.939378   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/ingress-addon-legacy-075799/client.crt: no such file or directory
E1026 01:09:26.019783   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/ingress-addon-legacy-075799/client.crt: no such file or directory
E1026 01:09:26.180253   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/ingress-addon-legacy-075799/client.crt: no such file or directory
E1026 01:09:26.500913   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/ingress-addon-legacy-075799/client.crt: no such file or directory
E1026 01:09:27.141991   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/ingress-addon-legacy-075799/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-327435: (2.07487233s)
--- PASS: TestKicCustomNetwork/create_custom_network (30.00s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (26.62s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-288303 --network=bridge
E1026 01:09:28.422751   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/ingress-addon-legacy-075799/client.crt: no such file or directory
E1026 01:09:30.983895   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/ingress-addon-legacy-075799/client.crt: no such file or directory
E1026 01:09:36.104264   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/ingress-addon-legacy-075799/client.crt: no such file or directory
E1026 01:09:46.345342   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/ingress-addon-legacy-075799/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-288303 --network=bridge: (24.677461664s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-288303" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-288303
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-288303: (1.924712057s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (26.62s)

                                                
                                    
TestKicExistingNetwork (26.69s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-039747 --network=existing-network
E1026 01:10:00.382427   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/functional-052267/client.crt: no such file or directory
E1026 01:10:06.826329   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/ingress-addon-legacy-075799/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-039747 --network=existing-network: (24.595426993s)
helpers_test.go:175: Cleaning up "existing-network-039747" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-039747
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-039747: (1.963004113s)
--- PASS: TestKicExistingNetwork (26.69s)

                                                
                                    
TestKicCustomSubnet (27.55s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-845803 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-845803 --subnet=192.168.60.0/24: (25.424277043s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-845803 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-845803" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-845803
E1026 01:10:47.786889   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/ingress-addon-legacy-075799/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-845803: (2.108782261s)
--- PASS: TestKicCustomSubnet (27.55s)

                                                
                                    
TestKicStaticIP (24.71s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-449746 --static-ip=192.168.200.200
E1026 01:11:09.209876   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-449746 --static-ip=192.168.200.200: (22.531001194s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-449746 ip
helpers_test.go:175: Cleaning up "static-ip-449746" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-449746
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-449746: (2.035375456s)
--- PASS: TestKicStaticIP (24.71s)

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (51.07s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-157828 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-157828 --driver=docker  --container-runtime=crio: (21.570069539s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-159690 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-159690 --driver=docker  --container-runtime=crio: (24.438255256s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-157828
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-159690
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-159690" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-159690
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-159690: (1.859820463s)
helpers_test.go:175: Cleaning up "first-157828" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-157828
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-157828: (2.173284168s)
--- PASS: TestMinikubeProfile (51.07s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (5.45s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-614829 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-614829 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.444814498s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.45s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-614829 ssh -- ls /minikube-host
E1026 01:12:09.707642   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/ingress-addon-legacy-075799/client.crt: no such file or directory
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (7.94s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-632991 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
E1026 01:12:16.539739   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/functional-052267/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-632991 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.941057774s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.94s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-632991 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.63s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-614829 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-614829 --alsologtostderr -v=5: (1.626987132s)
--- PASS: TestMountStart/serial/DeleteFirst (1.63s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-632991 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                    
TestMountStart/serial/Stop (1.22s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-632991
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-632991: (1.217870677s)
--- PASS: TestMountStart/serial/Stop (1.22s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.18s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-632991
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-632991: (6.177737061s)
--- PASS: TestMountStart/serial/RestartStopped (7.18s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-632991 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (128.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-204768 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E1026 01:12:44.222789   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/functional-052267/client.crt: no such file or directory
E1026 01:14:25.862866   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/ingress-addon-legacy-075799/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-204768 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (2m7.576142313s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204768 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (128.02s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (3.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-204768 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-204768 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-204768 -- rollout status deployment/busybox: (1.879321785s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-204768 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-204768 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-204768 -- exec busybox-5bc68d56bd-j4c2s -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-204768 -- exec busybox-5bc68d56bd-lvqzv -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-204768 -- exec busybox-5bc68d56bd-j4c2s -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-204768 -- exec busybox-5bc68d56bd-lvqzv -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-204768 -- exec busybox-5bc68d56bd-j4c2s -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-204768 -- exec busybox-5bc68d56bd-lvqzv -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.69s)

                                                
                                    
TestMultiNode/serial/AddNode (19.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-204768 -v 3 --alsologtostderr
E1026 01:14:53.547901   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/ingress-addon-legacy-075799/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-204768 -v 3 --alsologtostderr: (18.586583231s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204768 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (19.17s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.29s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204768 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204768 cp testdata/cp-test.txt multinode-204768:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204768 ssh -n multinode-204768 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204768 cp multinode-204768:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3861448188/001/cp-test_multinode-204768.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204768 ssh -n multinode-204768 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204768 cp multinode-204768:/home/docker/cp-test.txt multinode-204768-m02:/home/docker/cp-test_multinode-204768_multinode-204768-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204768 ssh -n multinode-204768 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204768 ssh -n multinode-204768-m02 "sudo cat /home/docker/cp-test_multinode-204768_multinode-204768-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204768 cp multinode-204768:/home/docker/cp-test.txt multinode-204768-m03:/home/docker/cp-test_multinode-204768_multinode-204768-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204768 ssh -n multinode-204768 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204768 ssh -n multinode-204768-m03 "sudo cat /home/docker/cp-test_multinode-204768_multinode-204768-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204768 cp testdata/cp-test.txt multinode-204768-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204768 ssh -n multinode-204768-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204768 cp multinode-204768-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3861448188/001/cp-test_multinode-204768-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204768 ssh -n multinode-204768-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204768 cp multinode-204768-m02:/home/docker/cp-test.txt multinode-204768:/home/docker/cp-test_multinode-204768-m02_multinode-204768.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204768 ssh -n multinode-204768-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204768 ssh -n multinode-204768 "sudo cat /home/docker/cp-test_multinode-204768-m02_multinode-204768.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204768 cp multinode-204768-m02:/home/docker/cp-test.txt multinode-204768-m03:/home/docker/cp-test_multinode-204768-m02_multinode-204768-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204768 ssh -n multinode-204768-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204768 ssh -n multinode-204768-m03 "sudo cat /home/docker/cp-test_multinode-204768-m02_multinode-204768-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204768 cp testdata/cp-test.txt multinode-204768-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204768 ssh -n multinode-204768-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204768 cp multinode-204768-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3861448188/001/cp-test_multinode-204768-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204768 ssh -n multinode-204768-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204768 cp multinode-204768-m03:/home/docker/cp-test.txt multinode-204768:/home/docker/cp-test_multinode-204768-m03_multinode-204768.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204768 ssh -n multinode-204768-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204768 ssh -n multinode-204768 "sudo cat /home/docker/cp-test_multinode-204768-m03_multinode-204768.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204768 cp multinode-204768-m03:/home/docker/cp-test.txt multinode-204768-m02:/home/docker/cp-test_multinode-204768-m03_multinode-204768-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204768 ssh -n multinode-204768-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204768 ssh -n multinode-204768-m02 "sudo cat /home/docker/cp-test_multinode-204768-m03_multinode-204768-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.20s)

                                                
                                    
TestMultiNode/serial/StopNode (2.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204768 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-204768 node stop m03: (1.203314438s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204768 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-204768 status: exit status 7 (450.532608ms)

                                                
                                                
-- stdout --
	multinode-204768
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-204768-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-204768-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204768 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-204768 status --alsologtostderr: exit status 7 (460.788281ms)

                                                
                                                
-- stdout --
	multinode-204768
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-204768-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-204768-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 01:15:15.907261  117028 out.go:296] Setting OutFile to fd 1 ...
	I1026 01:15:15.907525  117028 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1026 01:15:15.907535  117028 out.go:309] Setting ErrFile to fd 2...
	I1026 01:15:15.907542  117028 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1026 01:15:15.907735  117028 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17491-8444/.minikube/bin
	I1026 01:15:15.907931  117028 out.go:303] Setting JSON to false
	I1026 01:15:15.907973  117028 mustload.go:65] Loading cluster: multinode-204768
	I1026 01:15:15.908067  117028 notify.go:220] Checking for updates...
	I1026 01:15:15.908393  117028 config.go:182] Loaded profile config "multinode-204768": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1026 01:15:15.908408  117028 status.go:255] checking status of multinode-204768 ...
	I1026 01:15:15.908824  117028 cli_runner.go:164] Run: docker container inspect multinode-204768 --format={{.State.Status}}
	I1026 01:15:15.925651  117028 status.go:330] multinode-204768 host status = "Running" (err=<nil>)
	I1026 01:15:15.925683  117028 host.go:66] Checking if "multinode-204768" exists ...
	I1026 01:15:15.925921  117028 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-204768
	I1026 01:15:15.941666  117028 host.go:66] Checking if "multinode-204768" exists ...
	I1026 01:15:15.941971  117028 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 01:15:15.942019  117028 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-204768
	I1026 01:15:15.958189  117028 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32849 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/multinode-204768/id_rsa Username:docker}
	I1026 01:15:16.042526  117028 ssh_runner.go:195] Run: systemctl --version
	I1026 01:15:16.046233  117028 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 01:15:16.056313  117028 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 01:15:16.107326  117028 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:56 SystemTime:2023-10-26 01:15:16.098332923 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 01:15:16.107923  117028 kubeconfig.go:92] found "multinode-204768" server: "https://192.168.58.2:8443"
	I1026 01:15:16.107946  117028 api_server.go:166] Checking apiserver status ...
	I1026 01:15:16.107985  117028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 01:15:16.118308  117028 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1454/cgroup
	I1026 01:15:16.126971  117028 api_server.go:182] apiserver freezer: "6:freezer:/docker/704cc6eb735cf196584d203a621fd6870e0b0e3f9808545cb7993a1ec9708344/crio/crio-59e19b5efeb4aa4f18c74c7552390f045f00237b080c397aa96fcb60d503b7e5"
	I1026 01:15:16.127052  117028 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/704cc6eb735cf196584d203a621fd6870e0b0e3f9808545cb7993a1ec9708344/crio/crio-59e19b5efeb4aa4f18c74c7552390f045f00237b080c397aa96fcb60d503b7e5/freezer.state
	I1026 01:15:16.135163  117028 api_server.go:204] freezer state: "THAWED"
	I1026 01:15:16.135194  117028 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1026 01:15:16.140509  117028 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1026 01:15:16.140532  117028 status.go:421] multinode-204768 apiserver status = Running (err=<nil>)
	I1026 01:15:16.140541  117028 status.go:257] multinode-204768 status: &{Name:multinode-204768 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1026 01:15:16.140556  117028 status.go:255] checking status of multinode-204768-m02 ...
	I1026 01:15:16.140788  117028 cli_runner.go:164] Run: docker container inspect multinode-204768-m02 --format={{.State.Status}}
	I1026 01:15:16.158773  117028 status.go:330] multinode-204768-m02 host status = "Running" (err=<nil>)
	I1026 01:15:16.158796  117028 host.go:66] Checking if "multinode-204768-m02" exists ...
	I1026 01:15:16.159057  117028 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-204768-m02
	I1026 01:15:16.174905  117028 host.go:66] Checking if "multinode-204768-m02" exists ...
	I1026 01:15:16.175206  117028 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 01:15:16.175317  117028 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-204768-m02
	I1026 01:15:16.191896  117028 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32854 SSHKeyPath:/home/jenkins/minikube-integration/17491-8444/.minikube/machines/multinode-204768-m02/id_rsa Username:docker}
	I1026 01:15:16.282517  117028 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 01:15:16.293114  117028 status.go:257] multinode-204768-m02 status: &{Name:multinode-204768-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1026 01:15:16.293147  117028 status.go:255] checking status of multinode-204768-m03 ...
	I1026 01:15:16.293452  117028 cli_runner.go:164] Run: docker container inspect multinode-204768-m03 --format={{.State.Status}}
	I1026 01:15:16.310140  117028 status.go:330] multinode-204768-m03 host status = "Stopped" (err=<nil>)
	I1026 01:15:16.310186  117028 status.go:343] host is not running, skipping remaining checks
	I1026 01:15:16.310195  117028 status.go:257] multinode-204768-m03 status: &{Name:multinode-204768-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.12s)
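The status log above probes the apiserver in a fixed sequence: find the kube-apiserver PID, resolve its freezer cgroup, confirm the state is THAWED (not FROZEN), then hit /healthz. A minimal sketch of the freezer-state step, exercised against a stand-in file rather than a live /sys/fs/cgroup path:

```shell
# Sketch of the freezer-state check in the status probe above:
# a paused container's cgroup reports FROZEN, a running one THAWED.
# The temp file below is a stand-in for .../freezer.state (illustrative).
check_freezer_state() {
  state=$(cat "$1")
  [ "$state" = "THAWED" ]
}

tmp=$(mktemp)
printf 'THAWED\n' > "$tmp"
if check_freezer_state "$tmp"; then verdict="running"; else verdict="paused"; fi
rm -f "$tmp"
echo "$verdict"
```

Only when this check passes does the probe go on to curl the healthz endpoint, as the "returned 200: ok" lines above show.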

                                                
                                    
TestMultiNode/serial/StartAfterStop (10.52s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204768 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-204768 node start m03 --alsologtostderr: (9.847716761s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204768 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.52s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (110.86s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-204768
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-204768
multinode_test.go:290: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-204768: (24.825025278s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-204768 --wait=true -v=8 --alsologtostderr
E1026 01:16:09.210337   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/client.crt: no such file or directory
E1026 01:17:16.539239   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/functional-052267/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-204768 --wait=true -v=8 --alsologtostderr: (1m25.901649755s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-204768
--- PASS: TestMultiNode/serial/RestartKeepsNodes (110.86s)

                                                
                                    
TestMultiNode/serial/DeleteNode (4.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204768 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-204768 node delete m03: (4.116311055s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204768 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.70s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204768 stop
E1026 01:17:32.258688   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/client.crt: no such file or directory
multinode_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p multinode-204768 stop: (23.701383012s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204768 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-204768 status: exit status 7 (96.722632ms)

                                                
                                                
-- stdout --
	multinode-204768
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-204768-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204768 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-204768 status --alsologtostderr: exit status 7 (96.981704ms)

                                                
                                                
-- stdout --
	multinode-204768
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-204768-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 01:17:46.239868  127732 out.go:296] Setting OutFile to fd 1 ...
	I1026 01:17:46.240111  127732 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1026 01:17:46.240119  127732 out.go:309] Setting ErrFile to fd 2...
	I1026 01:17:46.240123  127732 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1026 01:17:46.240292  127732 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17491-8444/.minikube/bin
	I1026 01:17:46.240448  127732 out.go:303] Setting JSON to false
	I1026 01:17:46.240476  127732 mustload.go:65] Loading cluster: multinode-204768
	I1026 01:17:46.240505  127732 notify.go:220] Checking for updates...
	I1026 01:17:46.240841  127732 config.go:182] Loaded profile config "multinode-204768": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1026 01:17:46.240854  127732 status.go:255] checking status of multinode-204768 ...
	I1026 01:17:46.241238  127732 cli_runner.go:164] Run: docker container inspect multinode-204768 --format={{.State.Status}}
	I1026 01:17:46.260466  127732 status.go:330] multinode-204768 host status = "Stopped" (err=<nil>)
	I1026 01:17:46.260493  127732 status.go:343] host is not running, skipping remaining checks
	I1026 01:17:46.260499  127732 status.go:257] multinode-204768 status: &{Name:multinode-204768 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1026 01:17:46.260536  127732 status.go:255] checking status of multinode-204768-m02 ...
	I1026 01:17:46.260767  127732 cli_runner.go:164] Run: docker container inspect multinode-204768-m02 --format={{.State.Status}}
	I1026 01:17:46.278322  127732 status.go:330] multinode-204768-m02 host status = "Stopped" (err=<nil>)
	I1026 01:17:46.278368  127732 status.go:343] host is not running, skipping remaining checks
	I1026 01:17:46.278378  127732 status.go:257] multinode-204768-m02 status: &{Name:multinode-204768-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.90s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (78.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-204768 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-204768 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m18.182495984s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204768 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (78.77s)
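The go-template in the `kubectl get nodes` check above prints one "True"/"False" per node Ready condition. A sketch of evaluating that output for "all nodes Ready" (the sample string stands in for the real kubectl output):

```shell
# Count lines of the go-template output that are not "True"; zero means
# every node reported Ready. sample_output is illustrative, not live data.
sample_output=" True
 True"

not_ready=$(printf '%s\n' "$sample_output" | grep -cv 'True')
if [ "$not_ready" -eq 0 ]; then all_ready=yes; else all_ready=no; fi
echo "$all_ready"
```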

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (26.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-204768
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-204768-m02 --driver=docker  --container-runtime=crio
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-204768-m02 --driver=docker  --container-runtime=crio: exit status 14 (81.889416ms)

                                                
                                                
-- stdout --
	* [multinode-204768-m02] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17491
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17491-8444/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17491-8444/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-204768-m02' is duplicated with machine name 'multinode-204768-m02' in profile 'multinode-204768'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-204768-m03 --driver=docker  --container-runtime=crio
E1026 01:19:25.863777   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/ingress-addon-legacy-075799/client.crt: no such file or directory
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-204768-m03 --driver=docker  --container-runtime=crio: (24.052930016s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-204768
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-204768: exit status 80 (275.220602ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-204768
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-204768-m03 already exists in multinode-204768-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-204768-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-204768-m03: (1.861223337s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (26.33s)
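The MK_USAGE failure above comes from minikube rejecting a profile name that collides with a machine name inside an existing profile. A sketch of that uniqueness check, using the names from the log as sample data:

```shell
# Reject a new profile name that matches any existing machine name
# (name lists are samples taken from the log above, not queried live).
existing_machines="multinode-204768 multinode-204768-m02"
new_profile="multinode-204768-m02"

conflict=no
for m in $existing_machines; do
  [ "$m" = "$new_profile" ] && conflict=yes
done
echo "$conflict"
```

With `multinode-204768-m03` instead, the loop finds no match and the start proceeds, which matches the test's second run above.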

                                                
                                    
TestPreload (146.67s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-984181 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-984181 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m15.764524125s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-984181 image pull gcr.io/k8s-minikube/busybox
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-984181
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-984181: (5.762129071s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-984181 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1026 01:21:09.210159   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-984181 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m1.876108556s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-984181 image list
helpers_test.go:175: Cleaning up "test-preload-984181" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-984181
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-984181: (2.292496098s)
--- PASS: TestPreload (146.67s)

                                                
                                    
TestScheduledStopUnix (97.85s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-857292 --memory=2048 --driver=docker  --container-runtime=crio
E1026 01:22:16.539304   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/functional-052267/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-857292 --memory=2048 --driver=docker  --container-runtime=crio: (21.998247853s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-857292 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-857292 -n scheduled-stop-857292
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-857292 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-857292 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-857292 -n scheduled-stop-857292
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-857292
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-857292 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-857292
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-857292: exit status 7 (77.496023ms)

                                                
                                                
-- stdout --
	scheduled-stop-857292
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-857292 -n scheduled-stop-857292
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-857292 -n scheduled-stop-857292: exit status 7 (79.402233ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-857292" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-857292
E1026 01:23:39.583953   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/functional-052267/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-857292: (4.363434857s)
--- PASS: TestScheduledStopUnix (97.85s)

                                                
                                    
TestInsufficientStorage (13.24s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-416376 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-416376 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.82993219s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"2f8e54a6-8e12-4678-a7e2-a8a2d7942418","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-416376] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4778e70e-a71b-4192-8aa5-b03ed3c1a0c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17491"}}
	{"specversion":"1.0","id":"2f2a0e53-8af8-4be8-ae5a-47aaf9cfcf3a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6df8b1be-7001-43c9-ba67-fcc3e5778870","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17491-8444/kubeconfig"}}
	{"specversion":"1.0","id":"dbb8f2a4-0150-4060-b372-048f60ce1d6f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17491-8444/.minikube"}}
	{"specversion":"1.0","id":"105486ef-86f8-47a6-98a2-f2b051b3d760","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"9a50f576-a6c6-40ab-88e1-9cb030546c60","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7c5134d5-7efe-4db5-9192-0ca611a28881","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"13eaa73b-50f8-45ce-b4b5-cc8d9c80f9b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"cd2d700d-405a-4f16-89d3-fff00df4b601","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"29f034f7-f456-4e90-b429-2cf7fd4f0a71","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"dc99fe11-c53f-4ffa-8ae2-1042cd9a1732","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-416376 in cluster insufficient-storage-416376","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"b0b9dc28-e788-4b59-b467-417585922ca0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"7fe58633-3110-4a96-8798-84953f03993f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"410b3eeb-717a-4646-8243-6aef5049af2d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-416376 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-416376 --output=json --layout=cluster: exit status 7 (269.664621ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-416376","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-416376","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1026 01:23:52.744872  149792 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-416376" does not appear in /home/jenkins/minikube-integration/17491-8444/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-416376 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-416376 --output=json --layout=cluster: exit status 7 (271.130512ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-416376","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-416376","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1026 01:23:53.016921  149878 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-416376" does not appear in /home/jenkins/minikube-integration/17491-8444/kubeconfig
	E1026 01:23:53.026366  149878 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/insufficient-storage-416376/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-416376" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-416376
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-416376: (1.870760047s)
--- PASS: TestInsufficientStorage (13.24s)
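The `--output=json --layout=cluster` status above is a single JSON object, so it can be consumed by scripts. A sketch that pulls `StatusName` out of such a line without jq (the sample reuses field names from the output above; values are illustrative):

```shell
# Extract the top-level StatusName from a --layout=cluster JSON line
# using sed; sample is a trimmed stand-in for the real status output.
sample='{"Name":"insufficient-storage-416376","StatusCode":507,"StatusName":"InsufficientStorage"}'

status=$(printf '%s' "$sample" | sed -n 's/.*"StatusName":"\([^"]*\)".*/\1/p')
echo "$status"
```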

                                                
                                    
TestKubernetesUpgrade (360.72s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-747919 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-747919 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (53.651738641s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-747919
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-747919: (3.295422606s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-747919 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-747919 status --format={{.Host}}: exit status 7 (92.81833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-747919 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1026 01:25:48.909020   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/ingress-addon-legacy-075799/client.crt: no such file or directory
E1026 01:26:09.209419   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/client.crt: no such file or directory
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-747919 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m36.825128409s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-747919 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-747919 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-747919 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (83.512157ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-747919] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17491
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17491-8444/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17491-8444/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-747919
	    minikube start -p kubernetes-upgrade-747919 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7479192 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.3, by running:
	    
	    minikube start -p kubernetes-upgrade-747919 --kubernetes-version=v1.28.3
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-747919 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-747919 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (23.656220263s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-747919" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-747919
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-747919: (3.051432743s)
--- PASS: TestKubernetesUpgrade (360.72s)
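The K8S_DOWNGRADE_UNSUPPORTED refusal above boils down to a version comparison. A minimal sketch of that check (not minikube's actual source; `is_downgrade` is an illustrative helper) using `sort -V`:

```shell
# Hypothetical sketch: treat a request as a downgrade when it sorts below the
# cluster's current version under GNU version sort. Not minikube's real code.
is_downgrade() {
  current="$1" requested="$2"
  lowest=$(printf '%s\n%s\n' "$current" "$requested" | sort -V | head -n1)
  [ "$lowest" = "$requested" ] && [ "$current" != "$requested" ]
}

if is_downgrade v1.28.3 v1.16.0; then
  echo "refusing downgrade from v1.28.3 to v1.16.0"
fi
```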
TestMissingContainerUpgrade (160.29s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.9.0.1883673028.exe start -p missing-upgrade-491980 --memory=2200 --driver=docker  --container-runtime=crio
E1026 01:24:25.862200   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/ingress-addon-legacy-075799/client.crt: no such file or directory
version_upgrade_test.go:322: (dbg) Done: /tmp/minikube-v1.9.0.1883673028.exe start -p missing-upgrade-491980 --memory=2200 --driver=docker  --container-runtime=crio: (1m22.160699687s)
version_upgrade_test.go:331: (dbg) Run:  docker stop missing-upgrade-491980
version_upgrade_test.go:336: (dbg) Run:  docker rm missing-upgrade-491980
version_upgrade_test.go:342: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-491980 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:342: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-491980 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m14.211100529s)
helpers_test.go:175: Cleaning up "missing-upgrade-491980" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-491980
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-491980: (2.775498325s)
--- PASS: TestMissingContainerUpgrade (160.29s)
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-109790 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-109790 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (97.320732ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-109790] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17491
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17491-8444/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17491-8444/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
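The exit status 14 above comes from flag validation: `--no-kubernetes` and `--kubernetes-version` are mutually exclusive. A minimal sketch of that guard (illustrative only; `validate_flags` is not a minikube function):

```shell
# Hypothetical sketch of the MK_USAGE guard: reject --kubernetes-version when
# --no-kubernetes is set, mirroring the exit status 14 seen in the log.
validate_flags() {
  no_k8s="$1" k8s_version="$2"
  if [ "$no_k8s" = "true" ] && [ -n "$k8s_version" ]; then
    echo "cannot specify --kubernetes-version with --no-kubernetes" >&2
    return 14
  fi
}

validate_flags true 1.20 || echo "exit status $?"
```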
TestNoKubernetes/serial/StartWithK8s (39.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-109790 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-109790 --driver=docker  --container-runtime=crio: (38.969900769s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-109790 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (39.34s)
TestNoKubernetes/serial/StartWithStopK8s (14.91s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-109790 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-109790 --no-kubernetes --driver=docker  --container-runtime=crio: (12.498595486s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-109790 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-109790 status -o json: exit status 2 (342.069859ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-109790","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-109790
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-109790: (2.064457029s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (14.91s)
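The `status -o json` output above is machine-readable, so a script can gate on individual fields. A sketch of consuming it (the JSON literal is copied from the log; the grep-based parsing is an illustrative shortcut, and a real script might prefer `jq`):

```shell
# Sketch: check Host/Kubelet fields in the status JSON captured above.
status='{"Name":"NoKubernetes-109790","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}'

if printf '%s' "$status" | grep -q '"Host":"Running"' &&
   printf '%s' "$status" | grep -q '"Kubelet":"Stopped"'; then
  echo "host up, kubelet stopped"
fi
```

Note that `minikube status` itself exits non-zero (status 2 above) when components are stopped, so a caller should inspect the JSON rather than rely on the exit code alone.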
TestNoKubernetes/serial/Start (9.67s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-109790 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-109790 --no-kubernetes --driver=docker  --container-runtime=crio: (9.666255933s)
--- PASS: TestNoKubernetes/serial/Start (9.67s)
TestNoKubernetes/serial/VerifyK8sNotRunning (0.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-109790 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-109790 "sudo systemctl is-active --quiet service kubelet": exit status 1 (294.961447ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.30s)
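The `ssh: Process exited with status 3` above is the expected signal here: `systemctl is-active --quiet <unit>` exits 0 for an active unit and non-zero (conventionally 3 for inactive) otherwise. A self-contained sketch of that convention (no systemd involved; `unit_active` just simulates the exit status):

```shell
# Illustrative only: mimic the systemctl is-active exit-code convention.
unit_active() {
  # stand-in for: sudo systemctl is-active --quiet service kubelet
  sh -c "exit $1"
}

if unit_active 3; then
  echo "kubelet active"
else
  echo "kubelet not running"
fi
```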
TestNoKubernetes/serial/ProfileList (1.61s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.61s)
TestNoKubernetes/serial/Stop (1.79s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-109790
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-109790: (1.786222071s)
--- PASS: TestNoKubernetes/serial/Stop (1.79s)
TestNoKubernetes/serial/StartNoArgs (6.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-109790 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-109790 --driver=docker  --container-runtime=crio: (6.208641667s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.21s)
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-109790 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-109790 "sudo systemctl is-active --quiet service kubelet": exit status 1 (349.004382ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.35s)
TestStoppedBinaryUpgrade/Setup (0.43s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.43s)
TestStoppedBinaryUpgrade/MinikubeLogs (0.6s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-419792
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.60s)
TestPause/serial/Start (72.3s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-364666 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-364666 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m12.296997796s)
--- PASS: TestPause/serial/Start (72.30s)
TestNetworkPlugins/group/false (3.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-829682 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-829682 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (171.628054ms)

                                                
                                                
-- stdout --
	* [false-829682] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17491
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17491-8444/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17491-8444/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 01:27:13.011537  201258 out.go:296] Setting OutFile to fd 1 ...
	I1026 01:27:13.011668  201258 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1026 01:27:13.011678  201258 out.go:309] Setting ErrFile to fd 2...
	I1026 01:27:13.011682  201258 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1026 01:27:13.011857  201258 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17491-8444/.minikube/bin
	I1026 01:27:13.012474  201258 out.go:303] Setting JSON to false
	I1026 01:27:13.014255  201258 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4179,"bootTime":1698279454,"procs":939,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 01:27:13.014323  201258 start.go:138] virtualization: kvm guest
	I1026 01:27:13.017156  201258 out.go:177] * [false-829682] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1026 01:27:13.019120  201258 out.go:177]   - MINIKUBE_LOCATION=17491
	I1026 01:27:13.020680  201258 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 01:27:13.019223  201258 notify.go:220] Checking for updates...
	I1026 01:27:13.023897  201258 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17491-8444/kubeconfig
	I1026 01:27:13.025542  201258 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17491-8444/.minikube
	I1026 01:27:13.027039  201258 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 01:27:13.028483  201258 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 01:27:13.030487  201258 config.go:182] Loaded profile config "cert-expiration-884207": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1026 01:27:13.030609  201258 config.go:182] Loaded profile config "kubernetes-upgrade-747919": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1026 01:27:13.030715  201258 config.go:182] Loaded profile config "pause-364666": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1026 01:27:13.030813  201258 driver.go:378] Setting default libvirt URI to qemu:///system
	I1026 01:27:13.054257  201258 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1026 01:27:13.054348  201258 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 01:27:13.111521  201258 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:66 SystemTime:2023-10-26 01:27:13.101729224 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 01:27:13.111652  201258 docker.go:295] overlay module found
	I1026 01:27:13.113779  201258 out.go:177] * Using the docker driver based on user configuration
	I1026 01:27:13.115277  201258 start.go:298] selected driver: docker
	I1026 01:27:13.115294  201258 start.go:902] validating driver "docker" against <nil>
	I1026 01:27:13.115306  201258 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 01:27:13.117799  201258 out.go:177] 
	W1026 01:27:13.119447  201258 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1026 01:27:13.120998  201258 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-829682 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-829682

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-829682

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-829682

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-829682

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-829682

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-829682

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-829682

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-829682

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-829682

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-829682

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-829682"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-829682"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-829682"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-829682

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-829682"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-829682"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-829682" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-829682" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-829682" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-829682" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-829682" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-829682" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-829682" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-829682" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-829682"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-829682"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-829682"

>>> host: iptables-save:
* Profile "false-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-829682"

>>> host: iptables table nat:
* Profile "false-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-829682"

>>> k8s: describe kube-proxy daemon set:
error: context "false-829682" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-829682" does not exist

>>> k8s: kube-proxy logs:
error: context "false-829682" does not exist

>>> host: kubelet daemon status:
* Profile "false-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-829682"

>>> host: kubelet daemon config:
* Profile "false-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-829682"

>>> k8s: kubelet logs:
* Profile "false-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-829682"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-829682"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-829682"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17491-8444/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 26 Oct 2023 01:27:05 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.76.2:8443
  name: cert-expiration-884207
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17491-8444/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 26 Oct 2023 01:25:48 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.67.2:8443
  name: kubernetes-upgrade-747919
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17491-8444/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 26 Oct 2023 01:27:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-364666
contexts:
- context:
    cluster: cert-expiration-884207
    extensions:
    - extension:
        last-update: Thu, 26 Oct 2023 01:27:05 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: cert-expiration-884207
  name: cert-expiration-884207
- context:
    cluster: kubernetes-upgrade-747919
    user: kubernetes-upgrade-747919
  name: kubernetes-upgrade-747919
- context:
    cluster: pause-364666
    extensions:
    - extension:
        last-update: Thu, 26 Oct 2023 01:27:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: pause-364666
  name: pause-364666
current-context: pause-364666
kind: Config
preferences: {}
users:
- name: cert-expiration-884207
  user:
    client-certificate: /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/cert-expiration-884207/client.crt
    client-key: /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/cert-expiration-884207/client.key
- name: kubernetes-upgrade-747919
  user:
    client-certificate: /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/kubernetes-upgrade-747919/client.crt
    client-key: /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/kubernetes-upgrade-747919/client.key
- name: pause-364666
  user:
    client-certificate: /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/pause-364666/client.crt
    client-key: /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/pause-364666/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-829682

>>> host: docker daemon status:
* Profile "false-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-829682"

>>> host: docker daemon config:
* Profile "false-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-829682"

>>> host: /etc/docker/daemon.json:
* Profile "false-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-829682"

>>> host: docker system info:
* Profile "false-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-829682"

>>> host: cri-docker daemon status:
* Profile "false-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-829682"

>>> host: cri-docker daemon config:
* Profile "false-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-829682"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-829682"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-829682"

>>> host: cri-dockerd version:
* Profile "false-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-829682"

>>> host: containerd daemon status:
* Profile "false-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-829682"

>>> host: containerd daemon config:
* Profile "false-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-829682"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-829682"

>>> host: /etc/containerd/config.toml:
* Profile "false-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-829682"

>>> host: containerd config dump:
* Profile "false-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-829682"

>>> host: crio daemon status:
* Profile "false-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-829682"

>>> host: crio daemon config:
* Profile "false-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-829682"

>>> host: /etc/crio:
* Profile "false-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-829682"

>>> host: crio config:
* Profile "false-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-829682"

----------------------- debugLogs end: false-829682 [took: 3.298380553s] --------------------------------
helpers_test.go:175: Cleaning up "false-829682" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-829682
E1026 01:27:16.539451   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/functional-052267/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/false (3.63s)

TestPause/serial/SecondStartNoReconfiguration (43.03s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-364666 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-364666 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (43.007903811s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (43.03s)

TestStartStop/group/old-k8s-version/serial/FirstStart (106.42s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-547123 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-547123 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (1m46.415852818s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (106.42s)

TestPause/serial/Pause (0.65s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-364666 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.65s)

TestPause/serial/VerifyStatus (0.33s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-364666 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-364666 --output=json --layout=cluster: exit status 2 (326.348866ms)

-- stdout --
	{"Name":"pause-364666","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-364666","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.33s)

TestPause/serial/Unpause (0.64s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-364666 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.64s)

TestPause/serial/PauseAgain (0.75s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-364666 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.75s)

TestPause/serial/DeletePaused (4.21s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-364666 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-364666 --alsologtostderr -v=5: (4.206784307s)
--- PASS: TestPause/serial/DeletePaused (4.21s)

TestPause/serial/VerifyDeletedResources (0.64s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-364666
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-364666: exit status 1 (18.584606ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-364666: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.64s)

TestStartStop/group/no-preload/serial/FirstStart (56.65s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-616842 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
E1026 01:29:25.862751   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/ingress-addon-legacy-075799/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-616842 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (56.653569s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (56.65s)

TestStartStop/group/no-preload/serial/DeployApp (8.34s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-616842 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [00da721f-3ea8-4a97-b79a-986b849329f8] Pending
helpers_test.go:344: "busybox" [00da721f-3ea8-4a97-b79a-986b849329f8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [00da721f-3ea8-4a97-b79a-986b849329f8] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.016752365s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-616842 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.34s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-547123 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2a3cbbe3-fca9-4993-b460-6d47c9918b50] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2a3cbbe3-fca9-4993-b460-6d47c9918b50] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.015456095s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-547123 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.39s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.93s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-616842 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-616842 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.93s)

TestStartStop/group/no-preload/serial/Stop (11.93s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-616842 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-616842 --alsologtostderr -v=3: (11.928162814s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.93s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.79s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-547123 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-547123 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.79s)

TestStartStop/group/old-k8s-version/serial/Stop (11.99s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-547123 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-547123 --alsologtostderr -v=3: (11.994799089s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.99s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-616842 -n no-preload-616842
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-616842 -n no-preload-616842: exit status 7 (85.387556ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-616842 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/no-preload/serial/SecondStart (335.23s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-616842 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-616842 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (5m34.865223193s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-616842 -n no-preload-616842
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (335.23s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-547123 -n old-k8s-version-547123
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-547123 -n old-k8s-version-547123: exit status 7 (103.61011ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-547123 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/old-k8s-version/serial/SecondStart (426.33s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-547123 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-547123 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (7m5.987143587s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-547123 -n old-k8s-version-547123
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (426.33s)

TestStartStop/group/embed-certs/serial/FirstStart (70.24s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-001658 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-001658 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (1m10.23587486s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (70.24s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (71.36s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-619154 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
E1026 01:31:09.209926   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-619154 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (1m11.362999121s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (71.36s)

TestStartStop/group/embed-certs/serial/DeployApp (8.34s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-001658 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [cef9459f-7954-4341-a3e0-84d2fdbe69f7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [cef9459f-7954-4341-a3e0-84d2fdbe69f7] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.017086129s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-001658 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.34s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.36s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-619154 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9039bb9c-b53d-49a5-9810-ceee5beb87aa] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9039bb9c-b53d-49a5-9810-ceee5beb87aa] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.017135007s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-619154 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.36s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.94s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-001658 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-001658 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.94s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (11.95s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-001658 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-001658 --alsologtostderr -v=3: (11.954239406s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.95s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.89s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-619154 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-619154 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.89s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.96s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-619154 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-619154 --alsologtostderr -v=3: (11.956150435s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.96s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-001658 -n embed-certs-001658
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-001658 -n embed-certs-001658: exit status 7 (83.252178ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-001658 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)
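The `status error: exit status 7 (may be ok)` lines reflect that `minikube status` encodes cluster state in its exit code, so the test tolerates certain non-zero exits. A hedged sketch of the mapping this log exhibits (7 for a stopped host, 2 for a paused or stopped component); treat the table as illustrative, not minikube's authoritative exit-code list:

```shell
#!/bin/sh
# Classify `minikube status` exit codes the way the test does: some non-zero
# codes still mean "expected state" rather than failure. Mapping drawn from
# this log only.
status_may_be_ok() {
  case "$1" in
    0|2|7) return 0 ;;  # running, paused/stopped component, stopped host
    *)     return 1 ;;  # anything else is a genuine error
  esac
}

# e.g. after `minikube status --format={{.Host}} -p <profile>`; rc=$?
status_may_be_ok 7 && echo "exit status 7 (may be ok)"
```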

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (343.05s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-001658 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-001658 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (5m42.596647326s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-001658 -n embed-certs-001658
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (343.05s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-619154 -n default-k8s-diff-port-619154
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-619154 -n default-k8s-diff-port-619154: exit status 7 (86.956564ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-619154 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (337.58s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-619154 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
E1026 01:32:16.539753   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/functional-052267/client.crt: no such file or directory
E1026 01:34:12.259229   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/client.crt: no such file or directory
E1026 01:34:25.862634   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/ingress-addon-legacy-075799/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-619154 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (5m37.132932108s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-619154 -n default-k8s-diff-port-619154
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (337.58s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-tkdhk" [d987e629-6f77-46dc-a798-928d2ddcff58] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-tkdhk" [d987e629-6f77-46dc-a798-928d2ddcff58] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.018141247s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.02s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-tkdhk" [d987e629-6f77-46dc-a798-928d2ddcff58] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010329156s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-616842 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-616842 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.31s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.8s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-616842 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-616842 -n no-preload-616842
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-616842 -n no-preload-616842: exit status 2 (304.449138ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-616842 -n no-preload-616842
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-616842 -n no-preload-616842: exit status 2 (305.612645ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-616842 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-616842 -n no-preload-616842
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-616842 -n no-preload-616842
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.80s)
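Every Pause subtest in this report runs the same round trip: pause the profile, confirm `{{.APIServer}}` prints `Paused` and `{{.Kubelet}}` prints `Stopped` (each status call exiting with the tolerated status 2), then unpause and re-check. A sketch of the expected-state table drawn from this log; the helper name is illustrative:

```shell
#!/bin/sh
# Expected component states after `minikube pause`, as observed in this log:
# the apiserver reports "Paused" while the kubelet reports "Stopped".
expected_after_pause() {
  case "$1" in
    APIServer) echo "Paused" ;;
    Kubelet)   echo "Stopped" ;;
    *)         echo "Unknown" ;;
  esac
}

# A verification step would compare real status output against this, e.g.:
#   test "$(minikube status --format='{{.APIServer}}' -p <profile>)" = \
#        "$(expected_after_pause APIServer)"
expected_after_pause Kubelet
```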

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (38.46s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-478657 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
E1026 01:36:09.210162   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/addons-211632/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-478657 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (38.457413798s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (38.46s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.87s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-478657 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.87s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.25s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-478657 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-478657 --alsologtostderr -v=3: (1.248000969s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.25s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-478657 -n newest-cni-478657
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-478657 -n newest-cni-478657: exit status 7 (80.057734ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-478657 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (25.99s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-478657 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-478657 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (25.667146183s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-478657 -n newest-cni-478657
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (25.99s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-478657 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.51s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-478657 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-478657 -n newest-cni-478657
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-478657 -n newest-cni-478657: exit status 2 (300.200604ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-478657 -n newest-cni-478657
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-478657 -n newest-cni-478657: exit status 2 (298.003066ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-478657 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-478657 -n newest-cni-478657
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-478657 -n newest-cni-478657
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.51s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (42.99s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-829682 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-829682 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (42.990665317s)
--- PASS: TestNetworkPlugins/group/auto/Start (42.99s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-rqhn9" [da3fbfb6-5e45-4c1c-a8af-68f6e0af2657] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.017156199s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-rqhn9" [da3fbfb6-5e45-4c1c-a8af-68f6e0af2657] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009847144s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-547123 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-547123 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.32s)
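The VerifyKubernetesImages steps list images that minikube did not ship itself; in this log `kindest/kindnetd` and `gcr.io/k8s-minikube/busybox` are reported as non-minikube. A rough sketch of such a filter over the image names returned by `crictl images`; the single-registry allow-list here is an assumption for illustration, not the test's actual list:

```shell
#!/bin/sh
# Flag image references outside the core Kubernetes registry, mimicking the
# "Found non-minikube image" lines above. The allow-list is illustrative.
is_core_image() {
  case "$1" in
    registry.k8s.io/*) return 0 ;;
    *)                 return 1 ;;
  esac
}

# Feed it image names, e.g. extracted from: crictl images -o json
for img in registry.k8s.io/pause:3.9 kindest/kindnetd:v20230809-80a64d96; do
  is_core_image "$img" || echo "Found non-minikube image: $img"
done
```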

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.97s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-547123 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-547123 -n old-k8s-version-547123
E1026 01:37:16.539311   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/functional-052267/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-547123 -n old-k8s-version-547123: exit status 2 (309.192143ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-547123 -n old-k8s-version-547123
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-547123 -n old-k8s-version-547123: exit status 2 (307.931947ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-547123 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-547123 -n old-k8s-version-547123
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-547123 -n old-k8s-version-547123
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.97s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (44.57s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-829682 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-829682 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (44.569062405s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (44.57s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.49s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-829682 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.49s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (9.43s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-829682 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-vpnxb" [d2b76b60-fb4f-4e00-b1cd-57936145fac6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-vpnxb" [d2b76b60-fb4f-4e00-b1cd-57936145fac6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.012622779s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.43s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.02s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-pzznw" [f2867be2-eb3d-4516-a832-dfcca8f0649d] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-pzznw" [f2867be2-eb3d-4516-a832-dfcca8f0649d] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.018903997s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.02s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (13.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-qw6mw" [08a6d087-bfca-4efe-b781-79322b2eadad] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-qw6mw" [08a6d087-bfca-4efe-b781-79322b2eadad] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.032077997s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (13.03s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-829682 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-829682 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-829682 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.18s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-pzznw" [f2867be2-eb3d-4516-a832-dfcca8f0649d] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009859992s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-001658 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-qw6mw" [08a6d087-bfca-4efe-b781-79322b2eadad] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01036366s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-619154 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.33s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-001658 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.33s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.94s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-001658 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-001658 -n embed-certs-001658
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-001658 -n embed-certs-001658: exit status 2 (329.007827ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-001658 -n embed-certs-001658
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-001658 -n embed-certs-001658: exit status 2 (324.065959ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-001658 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-001658 -n embed-certs-001658
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-001658 -n embed-certs-001658
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.94s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-c5pwt" [d16a4562-bbcd-4f00-964e-c0807fd5cf27] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.019835656s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-619154 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.35s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.47s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-619154 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-619154 -n default-k8s-diff-port-619154
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-619154 -n default-k8s-diff-port-619154: exit status 2 (335.479065ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-619154 -n default-k8s-diff-port-619154
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-619154 -n default-k8s-diff-port-619154: exit status 2 (361.611023ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-619154 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-619154 -n default-k8s-diff-port-619154
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-619154 -n default-k8s-diff-port-619154
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.47s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-829682 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.51s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-829682 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-s6knt" [017bece2-f520-4993-8c19-035a4dd09df2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-s6knt" [017bece2-f520-4993-8c19-035a4dd09df2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.012830095s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.51s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (69.02s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-829682 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-829682 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m9.0182664s)
--- PASS: TestNetworkPlugins/group/calico/Start (69.02s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (59.03s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-829682 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-829682 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (59.029947128s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (59.03s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (83.62s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-829682 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-829682 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m23.620217972s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (83.62s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-829682 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.28s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.29s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-829682 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.29s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.26s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-829682 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.26s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (58.52s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-829682 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-829682 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (58.520013285s)
--- PASS: TestNetworkPlugins/group/flannel/Start (58.52s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-829682 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.33s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-829682 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-g4q6z" [164be9a1-1591-4cb0-97bc-ee5961b12d13] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-g4q6z" [164be9a1-1591-4cb0-97bc-ee5961b12d13] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.010400251s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.33s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-w7nwv" [0751be6e-d947-410b-886b-7be9669afda5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.021159885s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-829682 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-829682 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-829682 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-829682 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (9.34s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-829682 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-5p8j8" [5af418ac-419e-45b0-81db-db8719031b95] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1026 01:39:31.449296   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/no-preload-616842/client.crt: no such file or directory
E1026 01:39:31.454601   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/no-preload-616842/client.crt: no such file or directory
E1026 01:39:31.464858   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/no-preload-616842/client.crt: no such file or directory
E1026 01:39:31.485156   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/no-preload-616842/client.crt: no such file or directory
E1026 01:39:31.525574   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/no-preload-616842/client.crt: no such file or directory
E1026 01:39:31.606023   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/no-preload-616842/client.crt: no such file or directory
E1026 01:39:31.766413   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/no-preload-616842/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-5p8j8" [5af418ac-419e-45b0-81db-db8719031b95] Running
E1026 01:39:32.087086   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/no-preload-616842/client.crt: no such file or directory
E1026 01:39:32.727785   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/no-preload-616842/client.crt: no such file or directory
E1026 01:39:34.008734   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/no-preload-616842/client.crt: no such file or directory
E1026 01:39:36.569760   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/no-preload-616842/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.011112391s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.34s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-829682 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-829682 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-829682 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-829682 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.31s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-829682 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-bt49d" [fde6e911-b799-457e-b12c-cb7ac2a9b17f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1026 01:39:40.195117   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/old-k8s-version-547123/client.crt: no such file or directory
E1026 01:39:41.695130   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/no-preload-616842/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-bt49d" [fde6e911-b799-457e-b12c-cb7ac2a9b17f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.010124509s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.31s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-dntf6" [17e23a66-d975-452f-8189-f8274ab81703] Running
E1026 01:39:42.755970   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/old-k8s-version-547123/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.020352989s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (38.98s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-829682 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-829682 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (38.978312826s)
--- PASS: TestNetworkPlugins/group/bridge/Start (38.98s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-829682 "pgrep -a kubelet"
E1026 01:39:47.877166   15246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/old-k8s-version-547123/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.34s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-829682 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-4ckc7" [779fb2b7-1f0a-4618-a9dc-3d836f6d6824] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-4ckc7" [779fb2b7-1f0a-4618-a9dc-3d836f6d6824] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.010392685s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.34s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-829682 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.25s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-829682 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.20s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-829682 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

TestNetworkPlugins/group/flannel/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-829682 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.21s)

TestNetworkPlugins/group/flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-829682 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

TestNetworkPlugins/group/flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-829682 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-829682 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

TestNetworkPlugins/group/bridge/NetCatPod (9.24s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-829682 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-m8t5j" [50b5b95b-7f70-41f0-bd99-c009936ac611] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-m8t5j" [50b5b95b-7f70-41f0-bd99-c009936ac611] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.009385884s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.24s)

TestNetworkPlugins/group/bridge/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-829682 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)
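The DNS subtests above all reduce to one check: from inside the netcat pod, `nslookup kubernetes.default` must return at least one address. A minimal sketch of the same probe in Python (an illustration, not the test's implementation; outside a cluster `kubernetes.default` does not resolve, so the example queries `localhost` instead):

```python
import socket

def resolves(name: str) -> bool:
    """Return True if `name` resolves to at least one IPv4 address,
    mirroring what `nslookup <name>` verifies in the DNS subtests."""
    try:
        return len(socket.getaddrinfo(name, None, family=socket.AF_INET)) > 0
    except socket.gaierror:
        return False

# In-cluster the test resolves kubernetes.default via the cluster DNS;
# localhost is the closest stand-in that resolves everywhere.
print(resolves("localhost"))  # True
```

The test treats any successful resolution as a pass; it does not validate which address comes back.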

TestNetworkPlugins/group/bridge/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-829682 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

TestNetworkPlugins/group/bridge/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-829682 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)
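The Localhost and HairPin subtests are both plain TCP reachability probes (`nc -w 5 -i 5 -z <target> 8080`); they differ only in whether the pod dials itself as `localhost` or through its own service name (the hairpin path). A hedged sketch of the equivalent probe in Python, with a throwaway listener standing in for the netcat deployment's port 8080:

```python
import socket

def can_connect(host: str, port: int, timeout: float = 5.0) -> bool:
    """TCP connect-only probe, equivalent to `nc -w 5 -z host port`."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Throwaway listener standing in for the netcat pod's port 8080.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

print(can_connect("127.0.0.1", port))   # True: listener is up
srv.close()
print(can_connect("127.0.0.1", port))   # False: nothing listening
```

Because the probe only completes the TCP handshake (`-z`), a pass proves routing and NAT work for that path, not that the service behind the port behaves correctly.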

Test skip (24/308)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.3/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.3/cached-images (0.00s)

TestDownloadOnly/v1.28.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.3/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.3/binaries (0.00s)

TestDownloadOnly/v1.28.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.3/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.3/kubectl (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.4s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-094837" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-094837
--- SKIP: TestStartStop/group/disable-driver-mounts (0.40s)

TestNetworkPlugins/group/kubenet (3.68s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-829682 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-829682
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-829682
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-829682
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-829682
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-829682
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-829682
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-829682
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-829682
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-829682
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-829682
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-829682"
>>> host: /etc/hosts:
* Profile "kubenet-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-829682"
>>> host: /etc/resolv.conf:
* Profile "kubenet-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-829682"
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-829682
>>> host: crictl pods:
* Profile "kubenet-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-829682"
>>> host: crictl containers:
* Profile "kubenet-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-829682"
>>> k8s: describe netcat deployment:
error: context "kubenet-829682" does not exist
>>> k8s: describe netcat pod(s):
error: context "kubenet-829682" does not exist
>>> k8s: netcat logs:
error: context "kubenet-829682" does not exist
>>> k8s: describe coredns deployment:
error: context "kubenet-829682" does not exist
>>> k8s: describe coredns pods:
error: context "kubenet-829682" does not exist
>>> k8s: coredns logs:
error: context "kubenet-829682" does not exist
>>> k8s: describe api server pod(s):
error: context "kubenet-829682" does not exist
>>> k8s: api server logs:
error: context "kubenet-829682" does not exist
>>> host: /etc/cni:
* Profile "kubenet-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-829682"
>>> host: ip a s:
* Profile "kubenet-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-829682"
>>> host: ip r s:
* Profile "kubenet-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-829682"
>>> host: iptables-save:
* Profile "kubenet-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-829682"
>>> host: iptables table nat:
* Profile "kubenet-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-829682"
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-829682" does not exist
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-829682" does not exist
>>> k8s: kube-proxy logs:
error: context "kubenet-829682" does not exist
>>> host: kubelet daemon status:
* Profile "kubenet-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-829682"
>>> host: kubelet daemon config:
* Profile "kubenet-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-829682"
>>> k8s: kubelet logs:
* Profile "kubenet-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-829682"
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-829682"
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-829682"
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17491-8444/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 26 Oct 2023 01:27:05 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.76.2:8443
  name: cert-expiration-884207
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17491-8444/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 26 Oct 2023 01:25:48 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.67.2:8443
  name: kubernetes-upgrade-747919
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17491-8444/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 26 Oct 2023 01:27:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-364666
contexts:
- context:
    cluster: cert-expiration-884207
    extensions:
    - extension:
        last-update: Thu, 26 Oct 2023 01:27:05 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: cert-expiration-884207
  name: cert-expiration-884207
- context:
    cluster: kubernetes-upgrade-747919
    user: kubernetes-upgrade-747919
  name: kubernetes-upgrade-747919
- context:
    cluster: pause-364666
    extensions:
    - extension:
        last-update: Thu, 26 Oct 2023 01:27:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: pause-364666
  name: pause-364666
current-context: pause-364666
kind: Config
preferences: {}
users:
- name: cert-expiration-884207
  user:
    client-certificate: /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/cert-expiration-884207/client.crt
    client-key: /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/cert-expiration-884207/client.key
- name: kubernetes-upgrade-747919
  user:
    client-certificate: /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/kubernetes-upgrade-747919/client.crt
    client-key: /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/kubernetes-upgrade-747919/client.key
- name: pause-364666
  user:
    client-certificate: /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/pause-364666/client.crt
    client-key: /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/pause-364666/client.key
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-829682
>>> host: docker daemon status:
* Profile "kubenet-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-829682"
>>> host: docker daemon config:
* Profile "kubenet-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-829682"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-829682"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-829682"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-829682"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-829682"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-829682"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-829682"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-829682"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-829682"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-829682"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-829682"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-829682"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-829682"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-829682"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-829682"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-829682"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-829682"

                                                
                                                
----------------------- debugLogs end: kubenet-829682 [took: 3.516412666s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-829682" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-829682
--- SKIP: TestNetworkPlugins/group/kubenet (3.68s)

x
+
TestNetworkPlugins/group/cilium (3.97s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-829682 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-829682

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-829682

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-829682

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-829682

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-829682

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-829682

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-829682

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-829682

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-829682

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-829682

>>> host: /etc/nsswitch.conf:
* Profile "cilium-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829682"

>>> host: /etc/hosts:
* Profile "cilium-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829682"

>>> host: /etc/resolv.conf:
* Profile "cilium-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829682"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-829682

>>> host: crictl pods:
* Profile "cilium-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829682"

>>> host: crictl containers:
* Profile "cilium-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829682"

>>> k8s: describe netcat deployment:
error: context "cilium-829682" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-829682" does not exist

>>> k8s: netcat logs:
error: context "cilium-829682" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-829682" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-829682" does not exist

>>> k8s: coredns logs:
error: context "cilium-829682" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-829682" does not exist

>>> k8s: api server logs:
error: context "cilium-829682" does not exist

>>> host: /etc/cni:
* Profile "cilium-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829682"

>>> host: ip a s:
* Profile "cilium-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829682"

>>> host: ip r s:
* Profile "cilium-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829682"

>>> host: iptables-save:
* Profile "cilium-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829682"

>>> host: iptables table nat:
* Profile "cilium-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829682"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-829682

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-829682

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-829682" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-829682" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-829682

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-829682

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-829682" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-829682" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-829682" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-829682" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-829682" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829682"

>>> host: kubelet daemon config:
* Profile "cilium-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829682"

>>> k8s: kubelet logs:
* Profile "cilium-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829682"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829682"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829682"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17491-8444/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 26 Oct 2023 01:27:05 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.76.2:8443
  name: cert-expiration-884207
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17491-8444/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 26 Oct 2023 01:25:48 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.67.2:8443
  name: kubernetes-upgrade-747919
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17491-8444/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 26 Oct 2023 01:27:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-364666
contexts:
- context:
    cluster: cert-expiration-884207
    extensions:
    - extension:
        last-update: Thu, 26 Oct 2023 01:27:05 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: cert-expiration-884207
  name: cert-expiration-884207
- context:
    cluster: kubernetes-upgrade-747919
    user: kubernetes-upgrade-747919
  name: kubernetes-upgrade-747919
- context:
    cluster: pause-364666
    extensions:
    - extension:
        last-update: Thu, 26 Oct 2023 01:27:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: pause-364666
  name: pause-364666
current-context: pause-364666
kind: Config
preferences: {}
users:
- name: cert-expiration-884207
  user:
    client-certificate: /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/cert-expiration-884207/client.crt
    client-key: /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/cert-expiration-884207/client.key
- name: kubernetes-upgrade-747919
  user:
    client-certificate: /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/kubernetes-upgrade-747919/client.crt
    client-key: /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/kubernetes-upgrade-747919/client.key
- name: pause-364666
  user:
    client-certificate: /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/pause-364666/client.crt
    client-key: /home/jenkins/minikube-integration/17491-8444/.minikube/profiles/pause-364666/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-829682

>>> host: docker daemon status:
* Profile "cilium-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829682"

>>> host: docker daemon config:
* Profile "cilium-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829682"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829682"

>>> host: docker system info:
* Profile "cilium-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829682"

>>> host: cri-docker daemon status:
* Profile "cilium-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829682"

>>> host: cri-docker daemon config:
* Profile "cilium-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829682"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829682"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829682"

>>> host: cri-dockerd version:
* Profile "cilium-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829682"

>>> host: containerd daemon status:
* Profile "cilium-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829682"

>>> host: containerd daemon config:
* Profile "cilium-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829682"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829682"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829682"

>>> host: containerd config dump:
* Profile "cilium-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829682"

>>> host: crio daemon status:
* Profile "cilium-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829682"

>>> host: crio daemon config:
* Profile "cilium-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829682"

>>> host: /etc/crio:
* Profile "cilium-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829682"

>>> host: crio config:
* Profile "cilium-829682" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829682"

----------------------- debugLogs end: cilium-829682 [took: 3.803844726s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-829682" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-829682
--- SKIP: TestNetworkPlugins/group/cilium (3.97s)